Re: [openstack-dev] [Ironic][Neutron] Testing of Ironic/Neutron integration on devstack

2015-11-27 Thread Kevin Benton
> I don't see any reason why it wouldn't work with VXLAN.

Well, with the current approach in your code you would have to basically
re-implement all of the logic in the openvswitch agent that sets up VXLAN
tunnels and isolation of nodes on different VXLAN networks (either via flows
or local VLANs). That will be quite a bit of work.

We can already leverage the L2 agent's logic if, instead of setting
VLANs/VXLANs directly in your driver, we just wire up a patch port to br-int
using the correct name that the L2 agent expects for the Neutron port the
bare metal VM has. Then the L2 agent does its normal logic to get the
traffic onto the right VLAN/VXLAN/GRE/whatever.

On Nov 26, 2015, at 2:56 AM, Vasyl Saienko wrote:

> Hello Kevin,
>
> I've added some pictures that illustrate how it works with a HW switch and
> with VMs on devstack.
>
> On Wed, Nov 25, 2015 at 10:53 PM, Kevin Benton wrote:
>
>> This is cool. I didn't know you were working on an OVS driver for testing
>> in CI as well. :)
>>
>> Does this work by getting the port wired into OVS so the agent recognizes
>> it like a regular port so it can be put into VXLAN/VLAN or whatever the
>> node is configured with? From what I can tell it looks like it's on a
>> completely different bridge, so they wouldn't have connectivity to the
>> rest of the network.
>
> The driver works with VLAN at the moment; I don't see any reason why it
> wouldn't work with VXLAN.
>
> Ironic VMs are created on devstack by [0]. They are not registered in
> Nova/Neutron, so neutron-ovs-agent doesn't know anything about them. In
> single-node devstack you can't launch regular Nova VM instances, since
> compute_driver=ironic doesn't allow this. They would have connectivity to
> the rest of the network via 'br-int'.
>
>> I have some POC code [1] for 'baremetal' support directly in the OVS
>> agent so ports get treated just like VM ports. However, it requires
>> upstream changes, so if yours accomplishes the same thing without any
>> upstream changes, that will be the best way to go.
>
> In a real setup Neutron will plug the baremetal server into a specific
> network via the ML2 driver. We should keep the testing model as close as
> possible to the real Ironic use-case scenario. That is why we should have
> an ML2 driver that allows us to interact with OVS.
>
>> Perhaps we can merge your approach (config via ssh) with mine (getting
>> the 'baremetal' ports wired up for real connectivity) so we don't need
>> upstream changes.
>>
>> 1. https://review.openstack.org/#/c/249265/
>>
>> Cheers,
>> Kevin Benton
>>
>> On Wed, Nov 25, 2015 at 7:27 AM, Vasyl Saienko wrote:
>>
>>> Hello Community,
>>>
>>> As you know, Ironic/Neutron integration is planned in Mitaka, and at
>>> the moment we don't have any CI that will test it. Unfortunately we
>>> can't test Ironic/Neutron integration on HW as we don't have it. So
>>> probably the best way is to develop an ML2 driver that will work with
>>> OVS.
>>>
>>> At the moment we have a PoC [1] of an ML2 driver that works with Cisco
>>> and OVS on Linux. We also have some patches to devstack that allow
>>> trying Ironic/Neutron integration on VMs and real HW, and a quick guide
>>> on how to test it locally [0]:
>>>
>>> https://review.openstack.org/#/c/247513/
>>> https://review.openstack.org/#/c/248048/
>>> https://review.openstack.org/#/c/249717/
>>> https://review.openstack.org/#/c/248074/
>>>
>>> I'm interested in Neutron/Ironic integration. It would be great if we
>>> have it in Mitaka. I'm asking the community to check [0] and [1] and
>>> share your thoughts. Also I would like to request a repo on
>>> openstack.org for [1].
>>>
>>> [0] https://github.com/jumpojoy/ironic-neutron/blob/master/devstack/examples/ironic-neutron-vm.md
>>> [1] https://github.com/jumpojoy/generic_switch
>>>
>>> --
>>> Sincerely,
>>> Vasyl Saienko
>>
>> --
>> Kevin Benton
>
> [0] https://github.com/openstack-dev/devstack/blob/master/tools/ironic/scripts/create-node
> [1] https://review.openstack.org/#/c/249717
>
> --
> Sincerely,
> Vasyl Saienko
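As an illustration of the wiring Kevin describes above (handing a bare-metal
port to br-int so the L2 agent applies its normal VLAN/VXLAN logic), here is
a minimal sketch of plugging a tap device with the external-ids the OVS agent
matches on; the helper name and its arguments are illustrative, not part of
either patch:

import subprocess

def plug_baremetal_port(dev, neutron_port_id, mac):
    # Add the device to br-int and set the external-ids the Neutron OVS
    # agent uses to recognise a port and wire it into the right network.
    subprocess.check_call([
        'ovs-vsctl', '--', '--may-exist', 'add-port', 'br-int', dev,
        '--', 'set', 'Interface', dev,
        'external-ids:iface-id=%s' % neutron_port_id,
        'external-ids:iface-status=active',
        'external-ids:attached-mac=%s' % mac,
    ])

# e.g. plug_baremetal_port('tap1234abcd-ef', port['id'], port['mac_address'])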
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Nova mid cycle details

2015-11-27 Thread Murray, Paul (HP Cloud)
The Nova Mitaka mid cycle meetup is in Bristol, UK at the Hewlett Packard 
Enterprise office.

The mid cycle wiki page is here:
https://wiki.openstack.org/wiki/Sprints/NovaMitakaSprint

Note that there is a web site for signing up for the event and booking hotel 
rooms at a reduced event rate here:
https://starcite.smarteventscloud.com/hpe/NovaMidCycleMeeting

If you want to book a room at the event rate you do need to register on that 
site.

There is also an Eventbrite event that was created before the above web site
was available. Do not worry if you have registered using Eventbrite; we will
recognize those registrations as well. But if you do want to book a room, you
will need to register again on the above site.

Paul

Paul Murray
Nova Technical Lead, HPE Cloud
Hewlett Packard Enterprise
+44 117 316 2527


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Ansible] Building a dev env with AIO

2015-11-27 Thread Anthony Chow
I am trying to build a development environment for the OpenStack-Ansible
project.

I have an Ubuntu desktop with 8 GB of RAM and am using Vagrant to start a
14.04 VM so I can play around before setting up the environment on the
desktop.

Over the last few days I have followed the step-by-step guide and failed 3
times.  The last 2 times I failed while setting up the Galera cluster.

I checked http://github.com/openstack/openstack-ansible and do not see any
obvious changes in the run-playbook.sh script.  Only bootstrap-aio.sh was
updated a day ago.

Does anyone have insight as to what I can do next?

thanks and have a nice weekend,

anthony.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][infra][bugs] Grafana Dashboard for Bugs

2015-11-27 Thread Markus Zoeller
Paul Belanger  wrote on 11/26/2015 01:09:54 AM:

> From: Paul Belanger 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 11/26/2015 01:10 AM
> Subject: Re: [openstack-dev] [nova][infra][bugs] Grafana Dashboard for Bugs
> 
> On Tue, Nov 24, 2015 at 04:20:26PM +0100, Markus Zoeller wrote:
> > Background
> > ==
> > We have "grafana" for data visualization [1] and I'd like to introduce
> > a dashboard which shows data from our bug tracker Launchpad. Based on
> > ttx's code [2] for the "bugsquashing day stats" [3] I created a PoC 
> > (screenshot at [4]). The code which collects the data from Launchpad 
> > and pushes it into a statsd daemon is available at [5]*. Thanks to 
> > jeblair who showed me all the necessary pieces. I have chosen the
> > following namespace hierarchy for the metrics for statsd:
> > 
> > Metrics
> >   |- stats
> >|- gauges
> > |- launchpad
> >  |- bugs
> >   |- nova
> >|- new-by-tag
> >|- not-in-progress-by-importance
> >|- open-by-importance
> >|- open-by-status
> > 
> > The two reasons I've chosen it this way are:
> > 1) specify "launchpad" in case we will have multiple issue trackers
> >    at the same time and want to differentiate between them
> > 2) specify "nova" to separate the OpenStack projects
> > 
> > The code [5] I've written doesn't care about project specifics and can
> > be used for the other projects (Neutron, Cinder, Glance, ...) as well
> > without any changes. Only the "config.js" file has to be changed if
> > a project wants to opt in.
> > 
> > Open points
> > ===
> > * Any feedback if the data [4] I've chosen would be helpful to you?
> >
This is way cool! After we talked the other day, I started thinking more
about this. At first I didn't understand what you wanted to do, but after
thinking about it more and seeing the graph you produced this is very
powerful.

Yeah, showing something is often more useful than just talking :)
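As an illustration of the collector side, pushing one of the gauges from the
hierarchy above into statsd could look roughly like this (a sketch, assuming
the 'statsd' Python client and a statsd daemon on localhost; the counts are
placeholders for whatever Launchpad returns):

import statsd

client = statsd.StatsClient('localhost', 8125)

# e.g. numbers fetched from Launchpad for nova, keyed by importance
open_by_importance = {'Critical': 3, 'High': 17, 'Medium': 42}
for importance, count in open_by_importance.items():
    # ends up under stats.gauges.launchpad.bugs.nova.open-by-importance.*
    client.gauge('launchpad.bugs.nova.open-by-importance.%s' % importance,
                 count)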

> > * Which OpenStack project has the right scope for the code [5]?
> >
> I'm sure -infra is a good home for it.  However, there could be some
> integration with stackalytics too, since it is already hitting launchpad
> and scraping stats.

Stackalytics says in its README file

"Stackalytics is a service that automatically analyzes OpenStack
development activities and displays statistics on contribution."

which made me believe that metrics for bugs are within the scope of
the project. I've pushed a change set to Gerrit [1]; let's see what
feedback comes.

[1] https://review.openstack.org/#/c/250903/

> > * I still have to create a grafyaml [6] file for that. I've built the
> >   PoC dashboard with grafana's GUI.
> >
> Count me in to help :)

Awesome, thanks! I'm going to do as much as I can myself and I will
only pester you when I'm pretty stuck.

> > * I haven't yet run the code for the novaclient project, that's why
> >   there is a "N/A" in the screenshot.
> > * I would need an infra-contact who would help me to have this script
> >   executed repeatedly in a (tbd) interval (30mins?).
> > 
> I don't mind helping with the infra bit, shouldn't be hard to find a
> node to put this one.
> 
> > References
> > ==
> > [1] http://grafana.openstack.org/
> > [2] http://git.openstack.org/cgit/openstack-infra/bugdaystats/tree/bugdaystats.py
> > [3] http://status.openstack.org/bugday/
> > [4] Screenshot of the PoC nova bugs dashboard (expires on 2015-12-20):
> >     http://www.tiikoni.com/tis/view/?id=7f3f191
> > [5] https://gist.github.com/anonymous/4368eb69059f11286fe9
> > [6] http://docs.openstack.org/infra/grafyaml/
> > 
> > Footnotes
> > =
> > * you can set ``target="syso"`` to print the data to stdout without
> >   the need to have a statsd daemon running
> > 
> > Regards, Markus Zoeller (markus_z)

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] CentOS-7 Transition Plan

2015-11-27 Thread Tomasz Napierala

> On 23 Nov 2015, at 23:57, Igor Kalnitsky  wrote:
> 
> Hey Dmitry,
> 
> Thank you for your effort. I believe it's a huge step forward that
> opens number of possibilities.
> 
>> Every container runs systemd as PID 1 process instead of
>> supervisord or application / daemon.
> 
> Taking into account that we're going to drop Docker containers, I
> think it was an unnecessary complication of your work.

I was trying to find a place where it was agreed to drop containers, and 
failed. The only thread I’m aware of [0] does not seem to be closed and does 
not provide any clear decisions.


[0]http://lists.openstack.org/pipermail/openstack-dev/2015-November/079866.html

Regards,
-- 
Tomasz 'Zen' Napierala
Product Engineering - Poland







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Proposal for new core reviewer: ChangBo Guo

2015-11-27 Thread Davanum Srinivas
+1 from me! It would be great to have you on board, gcb.

-- Dims

On Fri, Nov 27, 2015 at 11:23 AM, Victor Stinner  wrote:
> Hi,
>
> I noticed that ChangBo Guo (aka gcb) is very active in various Oslo
> projects, both writing patches and reviewing patches written by others. He
> attends Oslo meetings, welcomes reviews, etc. It's a pleasure to work with
> him.
>
> To accelerate Oslo development, I propose inviting ChangBo to become an
> Oslo core reviewer. I asked him privately and he already replied that it
> would be an honor for him.
>
> What do you think?
>
> ChangBo Guo's open changes:
>
> https://review.openstack.org/#/q/owner:glongw...@gmail.com+status:open,n,z
>
> ChangBo Guo's merged changes:
>
> https://review.openstack.org/#/q/owner:glongw...@gmail.com+status:merged,n,z
>
> Victor
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Proposal for new core reviewer: ChangBo Guo

2015-11-27 Thread Victor Stinner

Hi,

I noticed that ChangBo Guo (aka gcb) is very active in various Oslo 
projects, both writing patches and reviewing patches written by others. 
He attends Oslo meetings, welcomes reviews, etc. It's a pleasure to work 
with him.


To accelerate Oslo development, I propose inviting ChangBo to become 
an Oslo core reviewer. I asked him privately and he already replied 
that it would be an honor for him.


What do you think?

ChangBo Guo's open changes:

https://review.openstack.org/#/q/owner:glongw...@gmail.com+status:open,n,z

ChangBo Guo's merged changes:

https://review.openstack.org/#/q/owner:glongw...@gmail.com+status:merged,n,z

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for live migration

2015-11-27 Thread 少合冯
2015-11-27 2:19 GMT+08:00 Daniel P. Berrange :

> On Thu, Nov 26, 2015 at 05:39:04PM +, Daniel P. Berrange wrote:
> > On Thu, Nov 26, 2015 at 11:55:31PM +0800, 少合冯 wrote:
> > > 3.  dynamically choose when to activate xbzrle compress for live
> migration.
> > >  This is the best.
> > >  xbzrle really wants to be used if the network is not able to keep
> up
> > > with the dirtying rate of the guest RAM.
> > >  But how do I check the coming migration fit this situation?
> >
> > FWIW, if we decide we want compression support in Nova, I think that
> > having the Nova libvirt driver dynamically decide when to use it is
> > the only viable approach. Unfortunately the way the QEMU support
> > is implemented makes it very hard to use, as QEMU forces you to decide
> > to use it upfront, at a time when you don't have any useful information
> > on which to make the decision :-(  To be useful IMHO, we really need
> > the ability to turn on compression on the fly for an existing active
> > migration process. ie, we'd start migration off and let it run and
> > only enable compression if we encounter problems with completion.
> > Sadly we can't do this with QEMU as it stands today :-(
> >
>
[Shaohe Feng]
I've added more people working on the kernel/hypervisor to our loop.
I wonder whether there will be any good solutions to improve this in QEMU
in the future.


> > Oh and of course we still need to address the issue of RAM usage and
> > communicating that need with the scheduler in order to avoid OOM
> > scenarios due to large compression cache.
> >
> > I tend to feel that the QEMU compression code is currently broken by
> > design and needs rework in QEMU before it can be pratically used in
> > an autonomous fashion :-(
>
> Actually thinking about it, there's not really any significant
> difference between Option 1 and Option 3. In both cases we want
> a nova.conf setting live_migration_compression=on|off to control
> whether we want to *permit* use  of compression.
>
> The only real difference between 1 & 3 is whether migration has
> compression enabled always, or whether we turn it on part way
> though migration.
>
> So although option 3 is our desired approach (which we can't
> actually implement due to QEMU limitations), option 1 could
> be made fairly similar if we start off with a very small
> compression cache size which would have the effect of more or
> less disabling compression initially.
>
> We already have logic in the code for dynamically increasing
> the max downtime value, which we could mirror here
>
> eg something like
>
>  live_migration_compression=on|off
>
>   - Whether to enable use of compression
>
>  live_migration_compression_cache_ratio=0.8
>
>   - The maximum size of the compression cache relative to
> the guest RAM size. Must be less than 1.0
>
>  live_migration_compression_cache_steps=10
>
>   - The number of steps to take to get from initial cache
> size to the maximum cache size
>
>  live_migration_compression_cache_delay=75
>
>   - The time delay in seconds between increases in cache
> size
>
>
> In the same way that we do with migration downtime, instead of
> increasing cache size linearly, we'd increase it in ever larger
> steps until we hit the maximum. So we'd start off fairly small
> a few MB, and monitoring the cache hit rates, we'd increase it
> periodically.  If the number of steps configured and time delay
> between steps are reasonably large, that would have the effect
> that most migrations would have a fairly small cache and would
> complete without needing much compression overhead.
>
> Doing this though, we still need a solution to the host OOM scenario
> problem. We can't simply check free RAM at start of migration and
> see if there's enough to spare for compression cache, as the schedular
> can spawn a new guest on the compute host at any time, pushing us into
> OOM. We really need some way to indicate that there is a (potentially
> very large) extra RAM overhead for the guest during migration.
>
> ie if live_migration_compression_cache_ratio is 0.8 and we have a
> 4 GB guest, we need to make sure the schedular knows that we are
> potentially going to be using 7.2 GB of memory during migration
>
>
[Shaohe Feng]
These suggestions sound good.
Thank you, Daniel.

Do we need to consider this factor:
  It seems XBZRLE compression is executed after the bulk stage. During the
  bulk stage we could calculate a transfer rate; if the transfer rate is
  below a certain threshold value, we can set a bigger cache size.
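To make the stepped cache growth above a bit more concrete, here is a rough
arithmetic sketch (purely illustrative; only the option names and the idea of
ever larger steps come from Daniel's proposal):

def cache_size_steps(guest_ram_bytes, cache_ratio=0.8, steps=10,
                     initial_bytes=8 * 1024 * 1024):
    # Grow geometrically from a small initial cache up to
    # cache_ratio * guest RAM, one value per step.
    max_cache = int(guest_ram_bytes * cache_ratio)
    growth = (float(max_cache) / initial_bytes) ** (1.0 / (steps - 1))
    for n in range(steps):
        yield min(max_cache, int(initial_bytes * growth ** n))

# e.g. a 4 GB guest: sizes from 8 MB up to ~3.2 GB over 10 steps, one step
# every live_migration_compression_cache_delay (75) seconds
print(list(cache_size_steps(4 * 1024 ** 3)))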




> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>

BR
Shaohe Feng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [puppet] Config support for oslo.config.cfg.MultiStrOpt

2015-11-27 Thread Martin Mágr

Greetings,

  I've submitted a patch to puppet-openstacklib [1] which adds a provider 
for parsing INI files containing duplicated variables (a.k.a. MultiStrOpt 
[2]). Such parameters are used, for example, to set 
service_providers/service_provider for Neutron LBaaSv2. The thought has been 
raised that the patch should rather be submitted to the puppetlabs-inifile 
module instead. The reason I did not submit the patch to the inifile module 
is that IMHO duplicate variables are not in the INI file spec [3]. Thoughts?
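For context, this is the shape of configuration oslo.config's MultiStrOpt
accepts: the same key repeated within one section is collected into a list.
A minimal sketch (the option definition mirrors how service_provider is
registered; the file path and provider strings are illustrative):

# INI input with a duplicated key:
#
#   [service_providers]
#   service_provider = LOADBALANCERV2:Haproxy:<driver class>:default
#   service_provider = VPN:openswan:<driver class>
#
from oslo_config import cfg

conf = cfg.ConfigOpts()
conf.register_opts(
    [cfg.MultiStrOpt('service_provider', default=[],
                     help='Defines providers for advanced services')],
    group='service_providers')
conf(['--config-file', '/etc/neutron/neutron_lbaas.conf'])  # illustrative path

# All repeated values come back as one list:
print(conf.service_providers.service_provider)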


Regards,
Martin


[1] https://review.openstack.org/#/c/234727/
[2] 
https://docs.openstack.org/developer/oslo.config/api/oslo.config.cfg.html#oslo.config.cfg.MultiStrOpt

[3] https://en.wikipedia.org/wiki/INI_file#Duplicate_names

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for live migration

2015-11-27 Thread 少合冯
2015-11-27 19:49 GMT+08:00 Daniel P. Berrange :

> On Fri, Nov 27, 2015 at 07:37:50PM +0800, 少合冯 wrote:
> > 2015-11-27 2:19 GMT+08:00 Daniel P. Berrange :
> >
> > > On Thu, Nov 26, 2015 at 05:39:04PM +, Daniel P. Berrange wrote:
> > > > On Thu, Nov 26, 2015 at 11:55:31PM +0800, 少合冯 wrote:
> > > > > 3.  dynamically choose when to activate xbzrle compress for live
> > > migration.
> > > > >  This is the best.
> > > > >  xbzrle really wants to be used if the network is not able to
> keep
> > > up
> > > > > with the dirtying rate of the guest RAM.
> > > > >  But how do I check the coming migration fit this situation?
> > > >
> > > > FWIW, if we decide we want compression support in Nova, I think that
> > > > having the Nova libvirt driver dynamically decide when to use it is
> > > > the only viable approach. Unfortunately the way the QEMU support
> > > > is implemented makes it very hard to use, as QEMU forces you to
> decide
> > > > to use it upfront, at a time when you don't have any useful
> information
> > > > on which to make the decision :-(  To be useful IMHO, we really need
> > > > the ability to turn on compression on the fly for an existing active
> > > > migration process. ie, we'd start migration off and let it run and
> > > > only enable compression if we encounter problems with completion.
> > > > Sadly we can't do this with QEMU as it stands today :-(
> > > >
> > >
> > [Shaohe Feng]
> > Add more guys working on kernel/hypervisor in our loop.
> > Wonder whether there will be any good solutions to improve it in QEMU in
> > future.
> >
>
IMHO, it is possible to enable XBZRLE on the fly for an existing active
migration process.
That would need improvements in QEMU.



> >
> > > > Oh and of course we still need to address the issue of RAM usage and
> > > > communicating that need with the scheduler in order to avoid OOM
> > > > scenarios due to large compression cache.
> > > >
> > > > I tend to feel that the QEMU compression code is currently broken by
> > > > design and needs rework in QEMU before it can be pratically used in
> > > > an autonomous fashion :-(
> > >
> > > Actually thinking about it, there's not really any significant
> > > difference between Option 1 and Option 3. In both cases we want
> > > a nova.conf setting live_migration_compression=on|off to control
> > > whether we want to *permit* use  of compression.
> > >
> > > The only real difference between 1 & 3 is whether migration has
> > > compression enabled always, or whether we turn it on part way
> > > though migration.
> > >
> > > So although option 3 is our desired approach (which we can't
> > > actually implement due to QEMU limitations), option 1 could
> > > be made fairly similar if we start off with a very small
> > > compression cache size which would have the effect of more or
> > > less disabling compression initially.
> > >
> > > We already have logic in the code for dynamically increasing
> > > the max downtime value, which we could mirror here
> > >
> > > eg something like
> > >
> > >  live_migration_compression=on|off
> > >
> > >   - Whether to enable use of compression
> > >
> > >  live_migration_compression_cache_ratio=0.8
> > >
> > >   - The maximum size of the compression cache relative to
> > > the guest RAM size. Must be less than 1.0
> > >
> > >  live_migration_compression_cache_steps=10
> > >
> > >   - The number of steps to take to get from initial cache
> > > size to the maximum cache size
> > >
> > >  live_migration_compression_cache_delay=75
> > >
> > >   - The time delay in seconds between increases in cache
> > > size
> > >
> > >
> > > In the same way that we do with migration downtime, instead of
> > > increasing cache size linearly, we'd increase it in ever larger
> > > steps until we hit the maximum. So we'd start off fairly small
> > > a few MB, and monitoring the cache hit rates, we'd increase it
> > > periodically.  If the number of steps configured and time delay
> > > between steps are reasonably large, that would have the effect
> > > that most migrations would have a fairly small cache and would
> > > complete without needing much compression overhead.
> > >
> > > Doing this though, we still need a solution to the host OOM scenario
> > > problem. We can't simply check free RAM at start of migration and
> > > see if there's enough to spare for compression cache, as the schedular
> > > can spawn a new guest on the compute host at any time, pushing us into
> > > OOM. We really need some way to indicate that there is a (potentially
> > > very large) extra RAM overhead for the guest during migration.
> > >
> > > ie if live_migration_compression_cache_ratio is 0.8 and we have a
> > > 4 GB guest, we need to make sure the schedular knows that we are
> > > potentially going to be using 7.2 GB of memory during migration
> > >
> > >
> > [Shaohe Feng]
> > These suggestions sounds good.
> > Thank you, Daneil.
> >
> > Do we need to 

[openstack-dev] [Fuel] Nominating Sergey Kulanov to core reviewers of fuel-main

2015-11-27 Thread Roman Vyalov
Hi all,
Sergey is doing great work and I hope our Fuel make system will become even
better.
Fuelers, please vote for Sergey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for live migration

2015-11-27 Thread Daniel P. Berrange
On Fri, Nov 27, 2015 at 07:37:50PM +0800, 少合冯 wrote:
> 2015-11-27 2:19 GMT+08:00 Daniel P. Berrange :
> 
> > On Thu, Nov 26, 2015 at 05:39:04PM +, Daniel P. Berrange wrote:
> > > On Thu, Nov 26, 2015 at 11:55:31PM +0800, 少合冯 wrote:
> > > > 3.  dynamically choose when to activate xbzrle compress for live
> > migration.
> > > >  This is the best.
> > > >  xbzrle really wants to be used if the network is not able to keep
> > up
> > > > with the dirtying rate of the guest RAM.
> > > >  But how do I check the coming migration fit this situation?
> > >
> > > FWIW, if we decide we want compression support in Nova, I think that
> > > having the Nova libvirt driver dynamically decide when to use it is
> > > the only viable approach. Unfortunately the way the QEMU support
> > > is implemented makes it very hard to use, as QEMU forces you to decide
> > > to use it upfront, at a time when you don't have any useful information
> > > on which to make the decision :-(  To be useful IMHO, we really need
> > > the ability to turn on compression on the fly for an existing active
> > > migration process. ie, we'd start migration off and let it run and
> > > only enable compression if we encounter problems with completion.
> > > Sadly we can't do this with QEMU as it stands today :-(
> > >
> >
> [Shaohe Feng]
> Add more guys working on kernel/hypervisor in our loop.
> Wonder whether there will be any good solutions to improve it in QEMU in
> future.
> 
> 
> > > Oh and of course we still need to address the issue of RAM usage and
> > > communicating that need with the scheduler in order to avoid OOM
> > > scenarios due to large compression cache.
> > >
> > > I tend to feel that the QEMU compression code is currently broken by
> > > design and needs rework in QEMU before it can be pratically used in
> > > an autonomous fashion :-(
> >
> > Actually thinking about it, there's not really any significant
> > difference between Option 1 and Option 3. In both cases we want
> > a nova.conf setting live_migration_compression=on|off to control
> > whether we want to *permit* use  of compression.
> >
> > The only real difference between 1 & 3 is whether migration has
> > compression enabled always, or whether we turn it on part way
> > though migration.
> >
> > So although option 3 is our desired approach (which we can't
> > actually implement due to QEMU limitations), option 1 could
> > be made fairly similar if we start off with a very small
> > compression cache size which would have the effect of more or
> > less disabling compression initially.
> >
> > We already have logic in the code for dynamically increasing
> > the max downtime value, which we could mirror here
> >
> > eg something like
> >
> >  live_migration_compression=on|off
> >
> >   - Whether to enable use of compression
> >
> >  live_migration_compression_cache_ratio=0.8
> >
> >   - The maximum size of the compression cache relative to
> > the guest RAM size. Must be less than 1.0
> >
> >  live_migration_compression_cache_steps=10
> >
> >   - The number of steps to take to get from initial cache
> > size to the maximum cache size
> >
> >  live_migration_compression_cache_delay=75
> >
> >   - The time delay in seconds between increases in cache
> > size
> >
> >
> > In the same way that we do with migration downtime, instead of
> > increasing cache size linearly, we'd increase it in ever larger
> > steps until we hit the maximum. So we'd start off fairly small
> > a few MB, and monitoring the cache hit rates, we'd increase it
> > periodically.  If the number of steps configured and time delay
> > between steps are reasonably large, that would have the effect
> > that most migrations would have a fairly small cache and would
> > complete without needing much compression overhead.
> >
> > Doing this though, we still need a solution to the host OOM scenario
> > problem. We can't simply check free RAM at start of migration and
> > see if there's enough to spare for compression cache, as the schedular
> > can spawn a new guest on the compute host at any time, pushing us into
> > OOM. We really need some way to indicate that there is a (potentially
> > very large) extra RAM overhead for the guest during migration.
> >
> > ie if live_migration_compression_cache_ratio is 0.8 and we have a
> > 4 GB guest, we need to make sure the schedular knows that we are
> > potentially going to be using 7.2 GB of memory during migration
> >
> >
> [Shaohe Feng]
> These suggestions sounds good.
> Thank you, Daneil.
> 
> Do we need to consider this factor:
>   Seems, XBZRLE compress is executed after bulk stage. During the bulk
> stage,
>   calculate an transfer rate. If the transfer rate bellow a certain
>   threshold value, we can set a bigger cache size.

I think it is probably sufficient to just look at the xbzrle cache
hit rates every "live_migration_compression_cache_delay" seconds
and decide how to tune the cache size based on that.
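A rough sketch of what such a tuning loop could look like from the driver
side follows. It assumes libvirt's jobStats() / migrateSetCompressionCache()
bindings and the compression_* job-stats keys as I understand them; the
threshold and step handling are purely illustrative:

import time

def tune_xbzrle_cache(dom, sizes, delay=75, miss_ratio=0.2):
    # dom is a libvirt.virDomain with an active migration; 'sizes' is an
    # increasing list of cache sizes in bytes (see the stepped growth idea
    # earlier in the thread).
    steps = iter(sizes)
    dom.migrateSetCompressionCache(next(steps), 0)
    for size in steps:
        time.sleep(delay)
        stats = dom.jobStats()
        pages = stats.get('compression_pages', 0)
        misses = stats.get('compression_cache_misses', 0)
        # Bump the cache one step only if the miss ratio looks too high.
        if pages + misses and float(misses) / (pages + misses) > miss_ratio:
            dom.migrateSetCompressionCache(size, 0)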



Re: [openstack-dev] [Murano] 'NoMatchingFunctionException: No function "#operator_." matches supplied arguments' error when adding an application to an environment

2015-11-27 Thread Stan Lagun
Here is the full story:

The real YAML that is generated is stored in the murano database, in the
packages table. Murano Dashboard obtains form definitions in YAML from the
API, but to improve performance it also caches them locally. When it does, it
stores them in Python pickle format [1] rather than the original YAML, but
doesn't change the extension (actually this is a bug). That's why, when you
take the yamls from the dashboard cache, they don't look like valid YAML even
though they contain the same information and can be decoded back (that's what
I did to find the error you had). But it is much easier to take the real YAML
from the database.

[1]: https://docs.python.org/2/library/pickle.html
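Incidentally, the cached file can be inspected roughly like this (a sketch;
the path is illustrative and it assumes the cache really is a pickle, as
described above):

import pickle
import pprint

# Point this at the cached ui.yaml taken from the dashboard cache directory.
with open('/tmp/murano-dashboard-cache/ui.yaml', 'rb') as f:
    form_definitions = pickle.load(f)

pprint.pprint(form_definitions)  # the same data the original YAML contains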

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis



On Fri, Nov 27, 2015 at 1:25 AM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Thanks Stan for the pointer.
>
> I removed the line that referred to the 'name' property and now my
> application is added to the environment without any errors.
> However, what I see in ui.yaml still doesn't look like YAML.
>
> I'm attaching samples again.
>
>
> Even for HOT packages the content is not YAML.
>
> Regards,
> --Vahid
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for live migration

2015-11-27 Thread Koniszewski, Pawel
> -Original Message-
> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> Sent: Friday, November 27, 2015 12:50 PM
> To: 少合冯
> Cc: Feng, Shaohe; OpenStack Development Mailing List (not for usage
> questions); Xiao, Guangrong; Ding, Jian-feng; Dong, Eddie; Wang, Yong Y; Jin,
> Yuntong
> Subject: Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for
> live migration
> 
> On Fri, Nov 27, 2015 at 07:37:50PM +0800, 少合冯 wrote:
> > 2015-11-27 2:19 GMT+08:00 Daniel P. Berrange :
> >
> > > On Thu, Nov 26, 2015 at 05:39:04PM +, Daniel P. Berrange wrote:
> > > > On Thu, Nov 26, 2015 at 11:55:31PM +0800, 少合冯 wrote:
> > > > > 3.  dynamically choose when to activate xbzrle compress for live
> > > migration.
> > > > >  This is the best.
> > > > >  xbzrle really wants to be used if the network is not able
> > > > > to keep
> > > up
> > > > > with the dirtying rate of the guest RAM.
> > > > >  But how do I check the coming migration fit this situation?
> > > >
> > > > FWIW, if we decide we want compression support in Nova, I think
> > > > that having the Nova libvirt driver dynamically decide when to use
> > > > it is the only viable approach. Unfortunately the way the QEMU
> > > > support is implemented makes it very hard to use, as QEMU forces
> > > > you to decide to use it upfront, at a time when you don't have any
> > > > useful information on which to make the decision :-(  To be useful
> > > > IMHO, we really need the ability to turn on compression on the fly
> > > > for an existing active migration process. ie, we'd start migration
> > > > off and let it run and only enable compression if we encounter
> problems with completion.
> > > > Sadly we can't do this with QEMU as it stands today :-(
> > > >
> > >
> > [Shaohe Feng]
> > Add more guys working on kernel/hypervisor in our loop.
> > Wonder whether there will be any good solutions to improve it in QEMU
> > in future.
> >
> >
> > > > Oh and of course we still need to address the issue of RAM usage
> > > > and communicating that need with the scheduler in order to avoid
> > > > OOM scenarios due to large compression cache.
> > > >
> > > > I tend to feel that the QEMU compression code is currently broken
> > > > by design and needs rework in QEMU before it can be pratically
> > > > used in an autonomous fashion :-(
> > >
> > > Actually thinking about it, there's not really any significant
> > > difference between Option 1 and Option 3. In both cases we want a
> > > nova.conf setting live_migration_compression=on|off to control
> > > whether we want to *permit* use  of compression.
> > >
> > > The only real difference between 1 & 3 is whether migration has
> > > compression enabled always, or whether we turn it on part way though
> > > migration.
> > >
> > > So although option 3 is our desired approach (which we can't
> > > actually implement due to QEMU limitations), option 1 could be made
> > > fairly similar if we start off with a very small compression cache
> > > size which would have the effect of more or less disabling
> > > compression initially.
> > >
> > > We already have logic in the code for dynamically increasing the max
> > > downtime value, which we could mirror here
> > >
> > > eg something like
> > >
> > >  live_migration_compression=on|off
> > >
> > >   - Whether to enable use of compression
> > >
> > >  live_migration_compression_cache_ratio=0.8
> > >
> > >   - The maximum size of the compression cache relative to
> > > the guest RAM size. Must be less than 1.0
> > >
> > >  live_migration_compression_cache_steps=10
> > >
> > >   - The number of steps to take to get from initial cache
> > > size to the maximum cache size
> > >
> > >  live_migration_compression_cache_delay=75
> > >
> > >   - The time delay in seconds between increases in cache
> > > size
> > >
> > >
> > > In the same way that we do with migration downtime, instead of
> > > increasing cache size linearly, we'd increase it in ever larger
> > > steps until we hit the maximum. So we'd start off fairly small a few
> > > MB, and monitoring the cache hit rates, we'd increase it
> > > periodically.  If the number of steps configured and time delay
> > > between steps are reasonably large, that would have the effect that
> > > most migrations would have a fairly small cache and would complete
> > > without needing much compression overhead.
> > >
> > > Doing this though, we still need a solution to the host OOM scenario
> > > problem. We can't simply check free RAM at start of migration and
> > > see if there's enough to spare for compression cache, as the
> > > schedular can spawn a new guest on the compute host at any time,
> > > pushing us into OOM. We really need some way to indicate that there
> > > is a (potentially very large) extra RAM overhead for the guest during
> migration.

What about CPU? We might end up with live migration that degrades performance 
of other VMs on the source and/or destination node. AFAIK CPUs are heavily 
oversubscribed in many cases and this does not help. I'm not sure that this 
thing fits into Nova as it requires resource monitoring.

[openstack-dev] [Fuel] Nominating Dmitry Burmistrov to core reviewers of fuel-mirror

2015-11-27 Thread Roman Vyalov
Hi all,
Dmitry is doing great work and I hope our Perestroika build system will
become even better.
At the moment Dmitry is a core developer of our Perestroika build system,
but he is not a core reviewer in the Gerrit repository.
Fuelers, please vote for Dmitry
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominating Sergey Kulanov to core reviewers of fuel-main

2015-11-27 Thread Dmitry Burmistrov
Agree
+1

On Fri, Nov 27, 2015 at 3:12 PM, Roman Vyalov  wrote:

> Hi all,
> Sergey is doing great work and I hope our Fuel make system will become
> even better.
> Fuelers, please vote for Sergey
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominating Dmitry Burmistrov to core reviewers of fuel-mirror

2015-11-27 Thread Sergey Kulanov
Hi,

My +1 for Dima

2015-11-27 14:19 GMT+02:00 Roman Vyalov :

> Hi all,
> Dmitry is doing great work and I hope our Perestroika build system will
> become even better.
> At the moment Dmitry is core developer in our Perestroika builds system.
> But he not core reviewer in gerrit repository.
> Fuelers, please vote for Dmitry
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sergey
DevOps Engineer
IRC: SergK
Skype: Sergey_kul
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for live migration

2015-11-27 Thread Koniszewski, Pawel
> -Original Message-
> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> Sent: Friday, November 27, 2015 1:24 PM
> To: Koniszewski, Pawel
> Cc: OpenStack Development Mailing List (not for usage questions); ???; Feng,
> Shaohe; Xiao, Guangrong; Ding, Jian-feng; Dong, Eddie; Wang, Yong Y; Jin,
> Yuntong
> Subject: Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for
> live migration
>
> On Fri, Nov 27, 2015 at 12:17:06PM +, Koniszewski, Pawel wrote:
> > > -Original Message-
> > > > > Doing this though, we still need a solution to the host OOM
> > > > > scenario problem. We can't simply check free RAM at start of
> > > > > migration and see if there's enough to spare for compression
> > > > > cache, as the schedular can spawn a new guest on the compute
> > > > > host at any time, pushing us into OOM. We really need some way
> > > > > to indicate that there is a (potentially very large) extra RAM
> > > > > overhead for the guest during
> > > migration.
> >
> > What about CPU? We might end up with live migration that degrades
> > performance of other VMs on source and/or destination node. AFAIK CPUs
> > are heavily oversubscribed in many cases and this does not help.
> > I'm not sure that this thing fits into Nova as it requires resource
> > monitoring.
>
> Nova already has the ability to set CPU usage tuning rules against each VM.
> Since the CPU overhead is attributed to the QEMU process, these existing
> tuning rules will apply. So there would only be impact on other VMs, if you
> do
> not have any CPU tuning rules set in Nova.

I'm not sure I understand it correctly; I assume that you are talking about
CPU pinning. Does it mean that compression/decompression runs as part of the
VM threads?

If not then, well, it will require all VMs to be pinned on both hosts, source
and destination (and in the whole cluster, because of the static
configuration...). Also, what about operating system performance? Will QEMU
somehow distinguish OS processes and avoid affecting them?

Also, nova can reserve some memory for the host. Will QEMU also respect that?

I like all this stuff about XBZRLE, but in my understanding it is a very
resource-sensitive feature.

Kind Regards,
Pawel Koniszewski


smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TripleO client answers file.

2015-11-27 Thread Lennart Regebro
On Thu, Nov 26, 2015 at 2:01 PM, Qasim Sarfraz  wrote:
> +1. That would be really helpful.
>
> What about passing other deployment parameters via answers.yaml ?  For
> example, compute-flavor, control-flavor etc

The overcloud update command doesn't take those parameters; however,
most of those will change Heat parameters, so you can put them in an
environment file and include that in the answers file. That way the
parameters will be preserved if you do an overcloud update.
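As an illustration, an answers file that pulls in such an environment file
could look roughly like this (a sketch of the answers-file layout as I
understand it; both paths are illustrative):

  # ~/answers.yaml
  templates: /usr/share/openstack-tripleo-heat-templates
  environments:
    - ~/flavors-env.yaml   # e.g. an environment file setting the flavor parameters

That way both the initial deploy and a later overcloud update read the same
environment files.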

//Lennart

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for live migration

2015-11-27 Thread 金运通
I think it'd be necessary to set live_migration_compression=on|off
dynamically, according to the memory and CPU available on the host at the
beginning of a compressed migration. Consider the case where there are 50 VMs
on a host and the operator wants to migrate them all to maintain/shut down
the host: having compression=on|off set dynamically will avoid host OOM, and
with this we could even consider leaving the scheduler out (i.e. not alerting
the scheduler about the memory/CPU consumed by compression).
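A trivial sketch of the kind of check that implies (illustrative only; the
MemAvailable heuristic and the headroom value are assumptions, not anything
Nova does today):

def compression_ok(guest_ram_bytes, cache_ratio=0.8, headroom_bytes=1024 ** 3):
    # Only turn compression on if the host still has room for the
    # compression cache plus some headroom.
    with open('/proc/meminfo') as f:
        meminfo = dict(line.split(':', 1) for line in f)
    available_kb = int(meminfo['MemAvailable'].strip().split()[0])
    return available_kb * 1024 > guest_ram_bytes * cache_ratio + headroom_bytes

# e.g. decide per guest at the start of the migration:
# live_migration_compression = compression_ok(4 * 1024 ** 3)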


BR,
YunTongJin

2015-11-27 21:58 GMT+08:00 Daniel P. Berrange :

> On Fri, Nov 27, 2015 at 01:01:15PM +, Koniszewski, Pawel wrote:
> > > -Original Message-
> > > From: Daniel P. Berrange [mailto:berra...@redhat.com]
> > > Sent: Friday, November 27, 2015 1:24 PM
> > > To: Koniszewski, Pawel
> > > Cc: OpenStack Development Mailing List (not for usage questions); ???;
> Feng,
> > > Shaohe; Xiao, Guangrong; Ding, Jian-feng; Dong, Eddie; Wang, Yong Y;
> Jin,
> > > Yuntong
> > > Subject: Re: [openstack-dev] [nova] [RFC] how to enable xbzrle
> compress for
> > > live migration
> > >
> > > On Fri, Nov 27, 2015 at 12:17:06PM +, Koniszewski, Pawel wrote:
> > > > > -Original Message-
> > > > > > > Doing this though, we still need a solution to the host OOM
> > > > > > > scenario problem. We can't simply check free RAM at start of
> > > > > > > migration and see if there's enough to spare for compression
> > > > > > > cache, as the schedular can spawn a new guest on the compute
> > > > > > > host at any time, pushing us into OOM. We really need some way
> > > > > > > to indicate that there is a (potentially very large) extra RAM
> > > > > > > overhead for the guest during
> > > > > migration.
> > > >
> > > > What about CPU? We might end up with live migration that degrades
> > > > performance of other VMs on source and/or destination node. AFAIK
> CPUs
> > > > are heavily oversubscribed in many cases and this does not help.
> > > > I'm not sure that this thing fits into Nova as it requires resource
> > > > monitoring.
> > >
> > > Nova already has the ability to set CPU usage tuning rules against
> each VM.
> > > Since the CPU overhead is attributed to the QEMU process, these
> existing
> > > tuning rules will apply. So there would only be impact on other VMs,
> if you
> > > do
> > > not have any CPU tuning rules set in Nova.
> >
> > Not sure I understand it correctly, I assume that you are talking about
> CPU
> > pinning. Does it mean that compression/decompression runs as part of VM
> > threads?
> >
> > If not then, well, it will require all VMs to be pinned on both hosts,
> source
> > and destination (and in the whole cluster because of static
> configuration...).
> > Also what about operating system performance? Will QEMU distinct OS
> processes
> > somehow and won't affect them?
>
> The compression runs in the migration thread of QEMU. This is not a vCPU
> thread, but one of the QEMU emulator threads. So CPU usage policy set
> against the QEMU emulator threads applies to the compression CPU overhead.
>
> > Also, nova can reserve some memory for the host. Will QEMU also respect
> it?
>
> No, its not QEMU's job to respect that. If you want to reserve resources
> for only the host OS, then you need to setup suitable cgroup partitions
> to separate VM from non-VM processes. The Nova reserved memory setting
> is merely a hint to the schedular - it has no functional effect on its
> own.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominating Dmitry Burmistrov to core reviewers of fuel-mirror

2015-11-27 Thread Vitaly Parakhin
+1

пятница, 27 ноября 2015 г. пользователь Roman Vyalov написал:

> Hi all,
> Dmitry is doing great work and I hope our Perestroika build system will
> become even better.
> At the moment Dmitry is core developer in our Perestroika builds system.
> But he not core reviewer in gerrit repository.
> Fuelers, please vote for Dmitry
>


-- 
Regards,
Vitaly Parakhin.
CI Engineer | Mirantis, Inc. | http://www.mirantis.com
IRC: brain461 @ chat.freenode.net | Slack: vparakhin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] preparing your mitaka-1 milestone tag

2015-11-27 Thread Doug Hellmann
Next week (Dec 1-3) is the Mitaka 1 milestone deadline. Release
liaisons for all managed projects using the cycle-with-milestones
release model will need to propose tags for their repositories by
Thursday. Tag requests submitted after Dec 3 will be rejected.

As a one-time change, we are also going to simplify how we specify
the versions for projects by moving to only using tags, and removing
the version entry from setup.cfg. As with most of the other changes
we are making this cycle, switching to only using tags for versioning
will simplify some of the automation and release management processes.

Because of the way pbr calculates development version numbers, we
need to be careful to tag the new milestone before removing the
version entry to avoid having our versions decrease on master (for
example, from something in the 12.0.0 series to something in the
11.0.0 series), which would disrupt users deploying from trunk
automatically.

Here are the steps we need to follow, for each project to tag the
milestone and safely remove the version entry:

1. Complete the reno integration so that release notes are building
   correctly, and add any release notes for work done up to this
   point.  Changes to project-config should be submitted and changes
   to add reno to each repository should be landed.

2. Prepare a patch to the deliverable file in the openstack/releases
   repository adding a *beta 1* tag for the upcoming release,
   selecting an appropriate SHA close to the tip of the master
   branch.

   For example, a project with version number 8.0.0 in setup.cfg
   right now should propose a tag 8.0.0.0b1 for this milestone (a rough
   example of such a deliverable entry is sketched after this list).

   The SHA should refer to a patch merged *after* all commits
   containing release notes intended for the milestone to ensure the
   notes are picked up in the right version.

3. Prepare a patch to the project repository removing the version
   line from setup.cfg.

   Set the patch to depend on the release patch from step 2, and
   use the topic "remove-version-from-setup".

4. Add a comment to the milestone tag request linking to the
   review from step 3.
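To make step 2 concrete, the beta 1 entry for that example might look
roughly like this (a sketch of the deliverable-file layout; the repository
name and SHA are placeholders):

  # openstack/releases: deliverables/mitaka/<project>.yaml
  releases:
    - version: 8.0.0.0b1
      projects:
        - repo: openstack/<project>
          hash: <sha of a commit merged after the release notes>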

We will wait to tag the milestone for a project until the reno
integration is complete and until the tag request includes a link
to a patch removing the version entry. Again, late submissions will
be rejected.

After your milestone is tagged, the patches to remove the version
entry from setup.cfg should be given high priority for reviews and
merged as quickly as possible.

Projects following the cycle-with-intermediary release model will need
to complete these steps around the time of their next release, but if
there is no release planned for the milestone week the work can wait.

As always, let me know if you have questions.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Call for review focus

2015-11-27 Thread Miguel Angel Ajo



Assaf Muller wrote:

On Mon, Nov 23, 2015 at 7:02 AM, Rossella Sblendido  wrote:

On 11/20/2015 03:54 AM, Armando M. wrote:


On 19 November 2015 at 18:26, Assaf Muller wrote:

 On Wed, Nov 18, 2015 at 9:14 PM, Armando M. wrote:
 >  Hi Neutrites,
 >
 >  We are nearly two weeks away from the end of Mitaka 1.
 >
 >  I am writing this email to invite you to be mindful to what you
review,
 >  especially in the next couple of weeks. Whenever you have the time
to review
 >  code, please consider giving priority to the following:
 >
 >  Patches that target blueprints targeted for Mitaka;
 >  Patches that target bugs that are either critical or high;
 >  Patches that target rfe-approved 'bugs';
 >  Patches that target specs that have followed the most current
submission
 >  process;

 Is it possible to create Gerrit dashboards for patches that answer
these
 criteria, and then persist the links in Neutron's dashboards devref
 page?
 http://docs.openstack.org/developer/neutron/dashboards/index.html
 That'd be super useful.


We should look into that, but to be perfectly honest I am not sure how
easy it would be, since we'd need to cross-reference content that lives
in Gerrit as well as in Launchpad. Would that even be possible?

To cross-reference we can use the bug ID or the blueprint name.

I created a script that queries launchpad to get:
1) Bug number of the bugs tagged with approved-rfe
2) Bug number of the critical/high bugs
3) list of blueprints targeted for the current milestone (mitaka-1)

With this info the script builds a .dash file that can be used by
gerrit-dash-creator [2] to produce a dashboard url .

The script prints also the queries that can be used in gerrit UI directly,
e.g.:
Critical/High Bugs
(topic:bug/1399249 OR topic:bug/1399280 OR topic:bug/1443421 OR
topic:bug/1453350 OR topic:bug/1462154 OR topic:bug/1478100 OR
topic:bug/1490051 OR topic:bug/1491131 OR topic:bug/1498790 OR
topic:bug/1505575 OR topic:bug/1505843 OR topic:bug/1513678 OR
topic:bug/1513765 OR topic:bug/1514810)


This is the dashboard I get right now [3]

I tried in many ways to get Gerrit to filter patches if the commit message
contains a bug ID. Something like:

(message:"#1399249" OR message:"#1399280" OR message:"#1443421" OR
message:"#1453350" OR message:"#1462154" OR message:"#1478100" OR
message:"#1490051" OR message:"#1491131" OR message:"#1498790" OR
message:"#1505575" OR message:"#1505843" OR message:"#1513678" OR
message:"#1513765" OR message:"#1514810")

but it doesn't work well, the result of the filter contains patches that
have nothing to do with the bugs queried.
That's why I had to filter using the topic.

CAVEAT: To make the dashboard work, bug fixes must use the topic "bug/ID"
and patches implementing a blueprint the topic "bp/name". If a patch is not
following this convention it won't be shown in the dashboard, since the
topic is used as a filter. Most of us use this convention already anyway so I
hope it's not too much of a burden.

Feedback is appreciated :)
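For reference, the Launchpad side of such a script can be done with
launchpadlib along these lines (a sketch; the search arguments and the
consumer name are illustrative):

from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_anonymously('neutron-review-dashboard', 'production',
                                 version='devel')
neutron = lp.projects['neutron']

# Approved RFEs and open Critical/High bugs, turned into a Gerrit topic query.
rfe_bugs = [task.bug.id for task in neutron.searchTasks(tags=['rfe-approved'])]
crit_high = [task.bug.id for task in neutron.searchTasks(
    status=['New', 'Confirmed', 'Triaged', 'In Progress'],
    importance=['Critical', 'High'])]

query = '(' + ' OR '.join('topic:bug/%d' % b
                          for b in sorted(set(crit_high))) + ')'
print(query)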


Rossella this is exactly what I wanted :) Let's iterate on the patch
and merge it.
We could then consider running the script automatically on a daily
basis and publishing the resulting URL in a nice bookmarkable place.


Just the thought I had. The dashboard generator is being very helpful to
me :), but it's something that could be automated.




[1] https://review.openstack.org/248645
[2] https://github.com/openstack/gerrit-dash-creator
[3] https://goo.gl/sglSbp


Btw, I was looking at the current blueprint assignments [1] for Mitaka:
there are some blueprints that still need assignee, approver and
drafter; we should close the gap. If there are volunteers, please reach
out to me.

Thanks,
Armando

[1] https://blueprints.launchpad.net/neutron/mitaka/+assignments


 >
 >  Everything else should come later, no matter how easy or interesting
it is
 >  to review; remember that as a community we have the collective duty
to work
 >  towards a common (set of) target(s), as being planned in
collaboration with
 >  the Neutron Drivers team and the larger core team.
 >
 >  I would invite submitters to ensure that the Launchpad resources
 >  (blueprints, and bug report) capture the most updated view in terms
of
 >  patches etc. Work with your approver to help him/her be focussed
where it
 >  matters most.
 >
 >  Finally, we had plenty of discussions at the design summit, and some
of
 >  those discussions will have to be followed up with actions (aka code
in
 >  OpenStack lingo). Even though, we no longer have deadlines for
feature
 >  submission, I strongly advise you not to leave it last minute. We
can only
 >  handle so much work for any given release, and past experience tells
us that
 >  we 

Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-27 Thread Sergii Golovatiuk
Hi,

On Wed, Nov 25, 2015 at 9:43 PM, Andrew Woodward  wrote:

> 
> IMO, removing the docker containers is a mistake v.s. fixing them and
> using them properly. They provide an isolation that is necessary (and that
> we mangle) to make services portable and scaleable. We really should sit
> down and document how we really want all of the services to interact before
> we rip the containers out.
>
>
If we are talking about dependencies, then all components should have global
requirements. It should be done in the same way as in all other OpenStack
projects.

If we are talking about component-to-component communication, we should
introduce API and bus communication to have loosely coupled components.

> I agree, the way we use containers now is still quite wrong and brings us
> some negative value, but I'm not sold on stripping them out now just
> because they no longer bring the same upgrade value as before.
> 
>
> My opinion aside, we are rushing into this far too late in the feature
> cycle. Prior to moving forward with this, we need a good QA plan; the spec
> is quite light on that and must receive review and approval from QA. This
> needs to include an actual testing plan.
>

+100.


> From the implementation side, we are pushing up against the FF deadline.
> We need to document what our time objectives are for this and when we will
> no longer consider this for 8.0.
>
> Lastly, for those that are +1 on the thread here, please review and
> comment on the spec; it's received almost no attention for something with
> such a large impact.
>
> On Tue, Nov 24, 2015 at 4:58 PM Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> The status is as follows:
>>
>> 1) Fuel-main [1] and fuel-library [2] patches can deploy the master node
>> w/o docker containers
>> 2) I've not built experimental ISO yet (have been testing and debugging
>> manually)
>> 3) There are still some flaws (need better formatting, etc.)
>> 4) Plan for tomorrow is to build experimental ISO and to begin fixing
>> system tests and fix the spec.
>>
>> [1] https://review.openstack.org/#/c/248649
>> [2] https://review.openstack.org/#/c/248650
>>
>> Vladimir Kozhukalov
>>
>> On Mon, Nov 23, 2015 at 7:51 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Colleagues,
>>>
>>> I've started working on the change. Here are two patches (fuel-main [1]
>>> and fuel-library [2]). They are not ready for review (they still do not work
>>> and are under active development). The changes are not going to be huge. Here is a
>>> spec [3]. I will keep the status up to date in this ML thread.
>>>
>>>
>>> [1] https://review.openstack.org/#/c/248649
>>> [2] https://review.openstack.org/#/c/248650
>>> [3] https://review.openstack.org/#/c/248814
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Mon, Nov 23, 2015 at 3:35 PM, Aleksandr Maretskiy <
>>> amarets...@mirantis.com> wrote:
>>>


 On Mon, Nov 23, 2015 at 2:27 PM, Bogdan Dobrelya <
 bdobre...@mirantis.com> wrote:

> On 23.11.2015 12:47, Aleksandr Maretskiy wrote:
> > Hi all,
> >
> > as you know, Rally runs inside docker on the Fuel master node, so docker
> > removal (a good improvement) is a problem for Rally users.
> >
> > To solve this, I'm planning to make a native Rally installation on the Fuel
> > master node (which runs CentOS 7), and then write a step-by-step
> > instruction on how to do this installation.
> >
> > So I hope docker removal will not cause issues for Rally users.
>
> I believe the most backwards-compatible scenario is to keep docker
> installed while moving the fuel-* docker things back to the host OS.
> So nothing would prevent a user from pulling and running whichever docker
> containers they want to put on the Fuel master node. Makes sense?
>
>
 Sounds good


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
>
> --
>
> Andrew Woodward
>
> Mirantis
>
> Fuel Community Ambassador
>
> Ceph Community
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

Re: [openstack-dev] Fwd: [Senlin] Support more complicated scaling scenario

2015-11-27 Thread Qiming Teng
On Wed, Nov 25, 2015 at 03:58:35PM +0800, Jun Xu wrote:
> 
> If a cluster has two scaling_in policies attached (policy1 with priority
> 20 and policy2 with priority 40),
> when the policy_check function is called, it will check policy1
> first and then check policy2.
> If any policy fails, it will return with CHECK_ERROR.  Does this
> conform to your original design?

yes, any policy check failure would mean the action was not supposed to
be executed.

...
 
> >Each policy instances creates its own policy-binding on a cluster. The
> >cooldown is recorded and then checked there. I can sense something is
> >wrong, but so far I'm not quite sure I understand the specific use case
> >that the current logic fails to support.
> For the following case:
> A cluster is attached with two policies as follows.
> policy1:  type=senlin.policy.scaling, cooldown=60s,   event:
> CLUSTER_SCALE_IN
> policy2:  type=senlin.policy.scaling, cooldown=300s, event:
> CLUSTER_SCALE_OUT
> 
> Then I do the following actions:
> 1) senlin cluster-scale-in -c 1  mycluster
> --  scale-in  ok
> 2) after 70s,  senlin cluster-scale-in -c 1  mycluster
> ---  scale-in failed, because policy2 is
> still in cooldown.

This sounds more like a bug in policy checking. Please help raise a bug,
we can jump onto it later.
 
> Now 'cooldown' is a common property for any kind of policy; I think
> this property may not be necessary for all kinds of policies, e.g.
> LB_POLICY.

This is actually a good point. Will bring this to the team for a
discussion. Thanks.
 
> >This is a misconception because a Senlin policy is not a Heat
> >ScalingPolicy.  A Senlin policy is checked before and/or after a
> >specified action is performed.
> 
> I get that they are different; I want to know how to combine these
> operations (e.g. webhook,
> ScalingPolicy, cluster, ceilometer alarms) to realize the autoscaling
> functions as in Heat.
> Hu yanyan has given a combinatorial method, but I think this
> method doesn't resolve the case.

Emm, I think we need to provide a tutorial document in tree.

> I really want to discuss the cooldown checking for multiple policies
> of the same type.
> The following is the difference between autoscaling in Senlin and Heat.
> In Senlin:
> policy1:  type=senlin.policy.scaling, cooldown=60s,   event:
> CLUSTER_SCALE_IN
> policy2:  type=senlin.policy.scaling, cooldown=300s, event:
> CLUSTER_SCALE_IN
> webhook1: count=1, action=CLUSTER_SCALE_IN
> webhook2: count=2, action=CLUSTER_SCALE_IN
> 
>trigger webhook1, both policy1's cooldown and policy2's cooldown
> will be checked.
>trigger webhook2, both policy1's cooldown and policy2's cooldown
> will be checked.
> 
> 
> In Heat
>policy1: type=OS::Heat::ScalingPolicy, cooldown=60s,
> scaling_adjustment=1
>policy2: type=OS::Heat::ScalingPolicy, cooldown=300s,
> scaling_adjustment=2
>policy1 will return a webhook as webhook1.
>policy2 will return a webhook as webhook2.
> 
>trigger webhook1, only policy1's cooldown will be checked.
>trigger webhook2, only policy2's cooldown will be checked.
> 

Think about this: why do you have a 'cooldown' property? It is mainly
designed to avoid thrashing behavior of a cluster/group, right? It
doesn't make a lot of sense to me to have each webhook specify a
different 'cooldown' value. In other words, 'cooldown' could be set to
be a property of the cluster/group. The cooldown checking logic you
outlined above hits the point -- should we shield the cluster from any
scaling requests during a 'cooldown' phase? My answer would be yes.
It is the cluster you want to protect, not the policy or action.

With that, we will discuss whether it makes sense to make 'cooldown' a
cluster property instead of a policy property.
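
To make that concrete, here is a minimal illustrative sketch (plain Python,
not Senlin's actual code or API) of a cluster-level cooldown check, where the
cluster itself records the time of its last scaling action:

    import time

    class Cluster(object):
        """Toy cluster that owns its own cooldown, instead of each policy."""

        def __init__(self, cooldown=60):
            self.cooldown = cooldown        # seconds; a property of the cluster
            self.last_scaling_at = None     # timestamp of the last scaling action

        def in_cooldown(self):
            if self.last_scaling_at is None:
                return False
            return (time.time() - self.last_scaling_at) < self.cooldown

        def request_scaling(self, do_scale):
            # Shield the cluster from *any* scaling request while cooling down,
            # no matter which policy or webhook triggered it.
            if self.in_cooldown():
                return 'CHECK_ERROR: cluster is in cooldown'
            do_scale()
            self.last_scaling_at = time.time()
            return 'OK'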

Thanks,
  Qiming 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for live migration

2015-11-27 Thread Daniel P. Berrange
On Fri, Nov 27, 2015 at 12:17:06PM +, Koniszewski, Pawel wrote:
> > -Original Message-
> > > > Doing this though, we still need a solution to the host OOM scenario
> > > > problem. We can't simply check free RAM at start of migration and
> > > > see if there's enough to spare for compression cache, as the
> > > > schedular can spawn a new guest on the compute host at any time,
> > > > pushing us into OOM. We really need some way to indicate that there
> > > > is a (potentially very large) extra RAM overhead for the guest during
> > migration.
> 
> What about CPU? We might end up with live migration that degrades
> performance of other VMs on source and/or destination node. AFAIK
> CPUs are heavily oversubscribed in many cases and this does not help.
> I'm not sure that this thing fits into Nova as it requires resource
> monitoring.

Nova already has the ability to set CPU usage tuning rules against
each VM. Since the CPU overhead is attributed to the QEMU process,
these existing tuning rules will apply. So there would only be an
impact on other VMs if you do not have any CPU tuning rules set
in Nova.
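
For reference, outside of Nova the XBZRLE compression under discussion is
driven at the libvirt/QEMU level roughly as follows (a hedged sketch: the
domain name is illustrative and exact flags vary by libvirt version, while
the Nova-side switch is precisely what this thread is debating):

    # Size the per-guest compression cache (here 64 MB) before migrating.
    virsh migrate-compcache instance-000002a1 --size 67108864

    # Request a compressed live migration; with XBZRLE the cache above is
    # consumed on the source host for the duration of the migration.
    virsh migrate --live --compressed instance-000002a1 \
        qemu+ssh://dest-host/system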


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with Unexpected vif_type=binding_failed

2015-11-27 Thread Mooney, Sean K
For kilo we provided a single node all in one example config here
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/_downloads/local.conf_example

I have modified that to be a controller with the interfaces and IPs from your
controller local.conf.
I do not have any kilo compute local.conf to hand, but I modified an old compute
local.conf so that it should work
using the IP and interface settings from your compute local.conf.
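
As a rough, hedged illustration (not the exact contents of the attached
files), the part of such a local.conf that decides the tenant network type is
just the standard devstack ML2 settings; the VLAN range below is only an
example:

    [[local|localrc]]
    Q_PLUGIN=ml2
    Q_ML2_TENANT_NETWORK_TYPE=vlan
    ENABLE_TENANT_VLANS=True
    ML2_VLAN_RANGES=default:1000:1010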


Regards
Sean.

From: Praveen MANKARA RADHAKRISHNAN [mailto:praveen.mank...@6wind.com]
Sent: Friday, November 27, 2015 9:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with 
Unexpected vif_type=binding_failed

Hi Sean,

I have changed the hostname on both machines
and tried again; I still have the same error.

I am trying to configure ovs-dpdk with VLAN now.
For the kilo version the getting started guide was missing in the repository,
but I have changed the repositories everywhere to kilo.

Please find the attached local.conf for compute and controller.

One change I have made is that I have added the ML2 plugin as VLAN for the
compute config as well,
because if I use the local.confs exactly as in the example, the controller
uses VLAN and the compute ends up with VXLAN for the ML2 config.

And please find all the errors present on the compute and controller.

Thanks
Praveen

On Thu, Nov 26, 2015 at 5:58 PM, Mooney, Sean K 
> wrote:
OpenStack uses the hostname as a primary key in many of the projects.
Nova and neutron both do this.
If you had two nodes with the same hostname then it would cause undefined
behavior.

Based on the error Andreas highlighted, are you currently trying to configure
ovs-dpdk with vxlan/gre?

I also noticed that the getting started guide you linked to earlier was for the
master branch (mitaka), but
you mentioned you were deploying kilo.
The local.conf settings will be different in both cases.





-Original Message-
From: Andreas Scheuring 
[mailto:scheu...@linux.vnet.ibm.com]
Sent: Thursday, November 26, 2015 1:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with 
Unexpected vif_type=binding_failed

Praveen,
there are many errors in your q-svc log.
It says:

InvalidInput: Invalid input for operation: (u'Tunnel IP %(ip)s in use with host 
%(host)s', {'ip': u'10.81.1.150', 'host':
u'localhost.localdomain'}).\n"]


Did you maybe specify duplicate IPs in your controller's and compute node's
neutron tunnel config?

Or did you change the hostname after installation?

Or maybe the code has trouble with duplicated host names?

--
Andreas
(IRC: scheuran)



On Di, 2015-11-24 at 15:28 +0100, Praveen MANKARA RADHAKRISHNAN wrote:
> Hi Sean,
>
>
> Thanks for the reply.
>
>
> Please find the logs attached.
> ovs-dpdk is correctly running in compute.
>
>
> Thanks
> Praveen
>
> On Tue, Nov 24, 2015 at 3:04 PM, Mooney, Sean K
> > wrote:
> Hi would you be able to attach the
> n-cpu log from the compute node and the
> n-sch and q-svc logs for the controller so we can see if there
> is a stack trace relating to the
> vm boot.
>
> Also can you confirm ovs-dpdk is running correctly on the
> compute node by running
>
> sudo service ovs-dpdk status
>
> the neutron and networking-ovs-dpdk commits are from their
> respective stable/kilo branches so they should be compatible
> provided no breaking changes have been merged to either
> branch.
>
> regards
> sean.
>
> From: Praveen MANKARA RADHAKRISHNAN
> [mailto:praveen.mank...@6wind.com]
> Sent: Tuesday, November 24, 2015 1:39 PM
> To: OpenStack Development Mailing List (not for usage
> questions)
> Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation
> fails with Unexpected vif_type=binding_failed
>
>
> Hi Przemek,
>
> Thanks for the response,
>
> Here are the commit ids for Neutron and networking-ovs-dpdk
>
> [stack@localhost neutron]$ git log --format="%H" -n 1
> 026bfc6421da796075f71a9ad4378674f619193d
> [stack@localhost neutron]$ cd ..
> [stack@localhost ~]$ cd networking-ovs-dpdk/
> [stack@localhost networking-ovs-dpdk]$ git log --format="%H" -n 1
> 90dd03a76a7e30cf76ecc657f23be8371b1181d2
>
> The Neutron agents are up and running on the compute node.
>
> Thanks
> Praveen
>
> On Tue, Nov 24, 2015 at 12:57 PM, Czesnowicz, Przemyslaw
> 

Re: [openstack-dev] [horizon] Supporting Django 1.9

2015-11-27 Thread Thomas Goirand
On 11/27/2015 11:18 AM, Rob Cresswell (rcresswe) wrote:
> Mitaka will support 1.9. I’m already working on it :)
> Liberty is >= 1.7 and < 1.9, so shouldn’t matter.
> 
> Rob

It does matter for me, at least until the final release of Mitaka. Could
you please make sure that all of these Django 1.9 patches are easily
identifiable, so that I can later on backport them (even if they never
reach the upstream stable/liberty branch)? That would help me a lot.
I'm not sure how to list them all after the fact though. Maybe opening
a wiki page? Any suggestions?

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for live migration

2015-11-27 Thread Daniel P. Berrange
On Fri, Nov 27, 2015 at 01:01:15PM +, Koniszewski, Pawel wrote:
> > -Original Message-
> > From: Daniel P. Berrange [mailto:berra...@redhat.com]
> > Sent: Friday, November 27, 2015 1:24 PM
> > To: Koniszewski, Pawel
> > Cc: OpenStack Development Mailing List (not for usage questions); ???; Feng,
> > Shaohe; Xiao, Guangrong; Ding, Jian-feng; Dong, Eddie; Wang, Yong Y; Jin,
> > Yuntong
> > Subject: Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for
> > live migration
> >
> > On Fri, Nov 27, 2015 at 12:17:06PM +, Koniszewski, Pawel wrote:
> > > > -Original Message-
> > > > > > Doing this though, we still need a solution to the host OOM
> > > > > > scenario problem. We can't simply check free RAM at start of
> > > > > > migration and see if there's enough to spare for compression
> > > > > > cache, as the schedular can spawn a new guest on the compute
> > > > > > host at any time, pushing us into OOM. We really need some way
> > > > > > to indicate that there is a (potentially very large) extra RAM
> > > > > > overhead for the guest during
> > > > migration.
> > >
> > > What about CPU? We might end up with live migration that degrades
> > > performance of other VMs on source and/or destination node. AFAIK CPUs
> > > are heavily oversubscribed in many cases and this does not help.
> > > I'm not sure that this thing fits into Nova as it requires resource
> > > monitoring.
> >
> > Nova already has the ability to set CPU usage tuning rules against each VM.
> > Since the CPU overhead is attributed to the QEMU process, these existing
> > tuning rules will apply. So there would only be impact on other VMs, if you
> > do
> > not have any CPU tuning rules set in Nova.
> 
> Not sure I understand it correctly; I assume that you are talking about CPU
> pinning. Does it mean that compression/decompression runs as part of the VM
> threads?
> 
> If not then, well, it will require all VMs to be pinned on both hosts, source
> and destination (and in the whole cluster because of static configuration...).
> Also, what about operating system performance? Will QEMU distinguish OS processes
> somehow and not affect them?

The compression runs in the migration thread of QEMU. This is not a vCPU
thread, but one of the QEMU emulator threads. So CPU usage policy set
against the QEMU emulator threads applies to the compression CPU overhead.

> Also, nova can reserve some memory for the host. Will QEMU also respect it?

No, it's not QEMU's job to respect that. If you want to reserve resources
for only the host OS, then you need to set up suitable cgroup partitions
to separate VM from non-VM processes. The Nova reserved memory setting
is merely a hint to the scheduler - it has no functional effect on its
own.
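
As a hedged illustration of that kind of cgroup partitioning on a systemd
host (where libvirt places guests under machine.slice; the limit value is
purely an example):

    # Cap the cgroup slice that holds the QEMU processes, so host services
    # outside machine.slice keep the remaining RAM even during migrations.
    systemctl set-property machine.slice MemoryLimit=96G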

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Ansible] Building a dev env with AIO

2015-11-27 Thread Major Hayden
On Fri, 2015-11-27 at 09:21 -0800, Anthony Chow wrote:
> I have a Ubuntu desktop with 8GB of ram and is using vagrant to start
> a 14.04 VM so I can play around before setting the environment on the
> desktop.
> 
> Over the last few days I have followed the Step-by-Step guide and
> failed 3 times.  The last 2 times I failed in setting up the galera
> cluster.

Hello Anthony,

My guess would be that your VM doesn't have enough RAM allocated to it
for the AIO build.  It's recommended [1] to have 16GB of RAM available
to the system if possible.  We do testing with 8GB VMs with a highly
specialized configuration that limits resource usage, but there's not
enough RAM left over for building VMs.

[1] http://docs.openstack.org/developer/openstack-ansible/developer-docs/quickstart-aio.html
--
Major Hayden


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Proposal for new core reviewer: ChangBo Guo

2015-11-27 Thread Joshua Harlow

+1 welcome to the team :)

Davanum Srinivas wrote:

+1 from me! would be great to have you on board gcb

-- Dims

On Fri, Nov 27, 2015 at 11:23 AM, Victor Stinner  wrote:

Hi,

I noticed that ChangBo Guo (aka gcb) is very active in various Oslo
projects, writing patches but also reviewing patches written by others. He
attends Oslo meetings, welcomes reviews, etc. It's a pleasure to work with
him.

To accelerate Oslo development, I propose to invite ChangBo to become an
Oslo core reviewer. I asked him privately and he already replied that it
would be an honor for him.

What do you think?

ChangBo Guo's open changes:

https://review.openstack.org/#/q/owner:glongw...@gmail.com+status:open,n,z

ChangBo Guo's merged changes:

https://review.openstack.org/#/q/owner:glongw...@gmail.com+status:merged,n,z

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder]Do we have project scope for cinder?

2015-11-27 Thread hao wang
Hi guys,

I notice nova has a clarification of project scope:
http://docs.openstack.org/developer/nova/project_scope.html

I wanted to find cinder's, but failed; do you know where to find it?

It's important to let developers know what features should be
introduced into cinder and what shouldn't.

BR
Wang Hao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas][fwaas] mitaka mid-cycle 1/12-1/15

2015-11-27 Thread Doug Wiegley
Hi all,

The LBaaS/Octavia/FWaaS mid-cycle will be in San Antonio this winter, from 
January 12-15.

Etherpad with details:

https://etherpad.openstack.org/p/lbaas-mitaka-midcycle

Please update if you are going to attend, so our host can get an accurate 
headcount. Hope to see you there!

Thanks,
doug



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nova mid cycle details

2015-11-27 Thread Michael Still
Hey,

I filled in the first part of that page, but when it got to hotels I got
confused. The web site doesn't seem to mention the night rate for the HP
price. Do you know what that is?

Thanks,
Michael

On Sat, Nov 28, 2015 at 3:06 AM, Murray, Paul (HP Cloud) 
wrote:

> The Nova Mitaka mid cycle meetup is in Bristol, UK at the Hewlett Packard
> Enterprise office.
>
>
>
> The mid cycle wiki page is here:
>
> https://wiki.openstack.org/wiki/Sprints/NovaMitakaSprint
>
>
>
> Note that there is a web site for signing up for the event and booking
> hotel rooms at a reduced event rate here:
>
> https://starcite.smarteventscloud.com/hpe/NovaMidCycleMeeting
>
>
>
> If you want to book a room at the event rate you do need to register on
> that site.
>
>
>
> There is also an Eventbrite event that was created before the above web
> site was available. Do not worry if you have registered using Eventbrite,
> we will recognize those registrations as well. But if you do want to book a
> room you will need to register again on the above site.
>
>
>
> Paul
>
>
>
> Paul Murray
>
> Nova Technical Lead, HPE Cloud
>
> Hewlett Packard Enterprise
>
> +44 117 316 2527
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How were we going to remove soft delete again?

2015-11-27 Thread Matt Riedemann



On 11/26/2015 6:17 AM, John Garbutt wrote:

On 26 November 2015 at 12:10, John Garbutt  wrote:

On 24 November 2015 at 16:36, Matt Riedemann  wrote:

I know in Vancouver we talked about no more soft delete and in Mitaka lxsli
decoupled the nova models from the SoftDeleteMixin [1].

 From what I remember, the idea is to not add the deleted column to new
tables, to not expose soft deleted resources in the REST API in new ways,
and to eventually drop the deleted column from the models.

I bring up the REST API because I was tinkering with the idea of allowing
non-admins to list/show their (soft) deleted instances [2]. Doing that,
however, would expose more of the REST API to deleted resources which makes
it harder to remove from the data model.

My question is, how were we thinking we were going to remove the deleted
column from the data model in a backward compatible way? A new microversion
in the REST API isn't going to magically work if we drop the column in the
data model, since anything before that microversion should still work - like
listing deleted instances for the admin.

Am I forgetting something? There were a lot of ideas going around the room
during the session in Vancouver and I'd like to sort out the eventual
long-term plan so we can document it in the devref about policies so that
when ideas like [2] come up we can point to the policy and say 'no we aren't
going to do that and here's why'.

[1]
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/no-more-soft-delete.html
[2]
https://blueprints.launchpad.net/nova/+spec/non-admin-list-deleted-instances


 From my memory, step 1 is to ensure we don't keep adding soft delete
by default/accident, which is where the explicit mix-in should help.

Step 2, is removing existing soft_deletes. Now we can add a new
microversion to remove the concept of requesting deleted things, but
as you point out, that doesn't help the older microversions.

What we could do is raise 403 errors when users request deleted things in
older versions of the API. I don't like that breaking API change, but
I also don't like the idea of keeping soft_delete in the database for
ever. It's a case of picking the best of two bad outcomes. I am not
sure we have reached consensus on the preferred approach yet.


I just realised, my text is ambiguous...

There is a difference between soft deleted instances, and soft delete in the DB.

If the instance could still be restored, and is not yet deleted, it
makes sense that policy could allow a non-admin to see those. But
that's a non-db-deleted instance in the SOFT_DELETED state.

I am still leaning towards killing the APIs that allow you to read
DB soft-deleted data. Although, in some ways that's because the API
changes based on the DB retention policy of the deployer, which seems
very odd.

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



What is the main reason again for removing soft deleted (deleted != 0) 
resources? Because of DB bloat? If that's the case, aren't operators 
pretty used to archiving/purging by now? Granted, the in-tree archive 
command is broken, and I'm working on fixing that, and we have a change 
up to add a purge command for (soft) deleted instances, but I'm trying 
to see if I'm forgetting something else here.


It's like soft delete is a Frankenstein and we're a mob out to kill it, 
but I forget every 6 months exactly why, and if it's totally necessary 
(and worth backward incompatible API changes). I think that's a question 
that operators and users should really answer rather than the nova dev team.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Using docker container to run COE daemons

2015-11-27 Thread Egor Guz
Jay,

"A/B testing" for PROD Infra sounds very cool ;) (we are doing it with business 
apps all the time, but stuck with canary, incremental rollout or blue-green (if 
we have enough capacity ;)) deployments for infra), do you mind share details 
how are you doing it? My concern is that you need at least to change container 
version and restart container/service, it sounds like typical configuration 
push.

I agree with Hongbin's concerns about blindly moving everything into containers.
Actually we are moving everything into containers for LAB/DEV environments
because it allows us to test/play with different versions/configs, but it's not
the case for PROD because we try to avoid adding extra complexity (e.g. the need
to monitor the Docker daemon itself). And building a new image (the current
process) is pretty trivial these days.

Have you tested the slave/agent inside a container? I was under the impression
that it didn't work until somebody from the Kolla team pointed me to
https://hub.docker.com/u/mesoscloud/.
Also I believe you can try your approach without any changes to the existing
template, because it just starts services and adds configuration. So you
can build an image which has the same services as the Docker containers, with
volumes mapped to config folders on the host.

―
Egor

From: Jay Lau >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Thursday, November 26, 2015 at 16:02
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [magnum] Using docker container to run COE daemons

One of the benefits of running daemons in docker containers is that the cluster
can be upgraded more easily. Take mesos as an example: if I can make mesos run
in a container, then when updating the mesos slave with some hot fixes, I can
upgrade the mesos slave to a new version in a gray upgrade, i.e. A/B test etc.

On Fri, Nov 27, 2015 at 12:01 AM, Hongbin Lu 
> wrote:
Jay,

Agree and disagree. Containerizing some COE daemons will facilitate version
upgrades and maintenance. However, I don't think it is correct to blindly
containerize everything unless there is an investigation performed to
understand the benefits and costs of doing that. Quoting Egor, the common
practice in k8s is to containerize everything except the kubelet, because it
seems it is just too hard to containerize everything. In the case of mesos, I am
not sure if it is a good idea to move everything to containers, given the fact
that it is relatively easy to manage and upgrade debian packages on Ubuntu.
However, in the new CoreOS mesos bay [1], mesos daemons will run in containers.

In summary, I think the correct strategy is to selectively containerize some
COE daemons, but we don't have to containerize *all* COE daemons.

[1] https://blueprints.launchpad.net/magnum/+spec/mesos-bay-with-coreos

Best regards,
Hongbin

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: November-26-15 2:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Using docker container to run COE daemons

Thanks Kai Qing, I filed a bp for mesos bay here 
https://blueprints.launchpad.net/magnum/+spec/mesos-in-container

On Thu, Nov 26, 2015 at 8:11 AM, Kai Qiang Wu 
> wrote:

Hi Jay,

For the Kubernetes COE container work, I think @Hua Wang is doing that.

For the swarm COE, swarm already has the master and agent running in containers.

For mesos, it still does not have container work until now. Maybe someone has
already drafted a bp on it? Not quite sure.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Jay Lau >
To: OpenStack Development Mailing List 
>
Date: 26/11/2015 07:15 am
Subject: [openstack-dev] [magnum] Using docker container to run COE daemons





Hi,

It is becoming more and more popular to use docker 

Re: [openstack-dev] [magnum]storage for docker-bootstrap

2015-11-27 Thread Egor Guz
Wanghua,

I don't think moving flannel to a container is a good idea. This setup is great
for a dev environment, but becomes too complex from an operator point of view (you
add an extra Docker daemon and need an extra Cinder volume for this daemon; also
keep in mind it makes sense to keep the etcd data folder on Cinder storage as well,
because etcd is a database). flannel is just three files without extra
dependencies and it's much easier to download it during cloud-init ;)

I agree that we have pain with building Fedora Atomic images, but instead of
simplifying this process we should switch to other, more "friendly" images (e.g.
Fedora/CentOS/Ubuntu) which we can easily build with diskimage-builder.
Also we can fix the CoreOS template (I believe people have asked about it more
than about Atomic), but we may face issues similar to Atomic's when we try to
integrate non-CoreOS products (e.g. Calico or Weave).

―
Egor

From: 王华 >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Thursday, November 26, 2015 at 00:15
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [magnum]storage for docker-bootstrap

Hi Hongbin,

The docker in the master node stores data in /dev/mapper/atomicos-docker--data and
metadata in /dev/mapper/atomicos-docker--meta.
/dev/mapper/atomicos-docker--data and /dev/mapper/atomicos-docker--meta are
logical volumes. The docker in the minion node stores data in the Cinder volume,
but /dev/mapper/atomicos-docker--data and /dev/mapper/atomicos-docker--meta are
not used. If we want to leverage a Cinder volume for docker in the master, should
we drop /dev/mapper/atomicos-docker--data and /dev/mapper/atomicos-docker--meta?
I think it is not necessary to allocate a Cinder volume. It is enough to allocate
two logical volumes for docker, because only etcd, flannel and k8s run in that
docker daemon, which does not need a large amount of storage.
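
(As a hedged sketch, allocating such a pair of logical volumes could look
roughly like the following; the volume group name and sizes are illustrative,
not what the Magnum templates actually do.)

    # Carve two small LVs out of the existing volume group for the
    # bootstrap docker daemon's data and metadata.
    lvcreate --name docker-bootstrap-data --size 2G atomicos
    lvcreate --name docker-bootstrap-meta --size 512M atomicos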

Best regards,
Wanghua

On Thu, Nov 26, 2015 at 12:40 AM, Hongbin Lu 
> wrote:
Here is a bit more context.

Currently, in the k8s and swarm bays, some required binaries (i.e. etcd and
flannel) are built into the image and run on the host. We are exploring the
possibility of containerizing some of these system components. The rationales are
(i) it is infeasible to build custom packages into an atomic image and (ii) it is
infeasible to upgrade individual components. For example, if there is a bug in
the current version of flannel and we know the bug was fixed in the next version,
we need to upgrade flannel by building a new image, which is a tedious process.

To containerize flannel, we need a second docker daemon, called
docker-bootstrap [1]. In this setup, pods run on the main docker
daemon, and flannel and etcd run on the second docker daemon. The
reason is that flannel needs to manage the network of the main docker daemon,
so it needs to run on a separate daemon.
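
Roughly, the bootstrap daemon described in [1] is started with its own socket,
pid file and graph directory, along these lines (an illustrative sketch rather
than the exact Magnum/Kubernetes invocation):

    docker daemon \
        -H unix:///var/run/docker-bootstrap.sock \
        -p /var/run/docker-bootstrap.pid \
        --iptables=false --ip-masq=false --bridge=none \
        --graph=/var/lib/docker-bootstrap &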

Daneyon, I think it requires separate storage because it needs to run a
separate docker daemon (unless there is a way to make two docker daemons share
the same storage).

Wanghua, is it possible to leverage a Cinder volume for that? Leveraging external
storage is always preferred [2].

[1] 
http://kubernetes.io/v1.1/docs/getting-started-guides/docker-multinode.html#bootstrap-docker
[2] http://www.projectatomic.io/docs/docker-storage-recommendation/

Best regards,
Hongbin

From: Daneyon Hansen (danehans) 
[mailto:daneh...@cisco.com]
Sent: November-25-15 11:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]storage for docker-bootstrap



From: 王华 >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, November 25, 2015 at 5:00 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [magnum]storage for docker-bootstrap

Hi all,

I am working on containerizing etcd and flannel. But I met a problem. As 
described in [1], we need a docker-bootstrap. Docker and docker-bootstrap can 
not use the same storage, so we need some disk space for it.

I reviewed [1] and I do not see where the bootstrap docker instance requires 
separate storage.

The docker in the master node stores data in /dev/mapper/atomicos-docker--data and
metadata in /dev/mapper/atomicos-docker--meta. The disk space left is too small
for docker-bootstrap. Even if the root_gb of the instance flavor is 20G, only
8G can be used in our image. I want to make it bigger. One way is 
[openstack-dev] OpenStack Developer Mailing List Digest November 21-27

2015-11-27 Thread Mike Perez
Perma link: 
http://www.openstack.org/blog/2015/11/openstack-developer-mailing-list-digest-november-20151121/

Success Bot Says
===

* vkmc: We got 7 new interns for the Outreachy program December-March 2015
  round.
* bauzas: Reno in place for Nova release notes.
* AJaeger: We now have Japanese Install Guides published for Liberty [1].
* robcresswell: Horizon had a bug day! We made good progress on categorizing
  new bugs and removing old ones, with many members of the community stepping
  up to help.
* AJaeger: The OpenStack Architecture Design Guide has been converted to RST
  [2].
* AJaeger: The Virtual Machine Image guide has been converted to RST [3].
* Ajaeger: Japanese Networking Guide is published as draft [4].
* Tell us yours via IRC with a message “#success [insert success]”.

Release countdown for week R-18, Nov 30 - Dec 4
===

* All projects following the cycle-with-milestones release model should be
  preparing for the milestone tag.
* Release Actions:

  - All deliverables must have Reno configured before adding a Mitaka-1
milestone tag.
  - Use openstack/releases repository to manage the Mitaka-1 milestone tags.
  - As a one-time change, we will be simplifying how we specify the versions for
projects by moving to only using tags instead of the version entry in
setup.cfg.

* Stable release actions: Review stable/liberty branches for patches that have
  landed since the last release and determine if your deliverables need new
  tags.
* Important dates:

  - Deadline for requesting a Mitaka-1 milestone tag: December 3rd
  - Mitaka-2: Jan 19-21
  - Mitaka release schedule [5]

* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/080527.html

Common OpenStack ‘Third-Party’ CI Solution - DONE!
==

* Ramy Asselin, who has been spearheading the work for a common third-party CI
  solution, announces that it is done!

  - This solution uses the same tools and scripts as the upstream Jenkins CI
solution.
  - The documentation for setting up a 3rd party ci system on 2 VMs (1 private
that runs the CI jobs, and 1 public that hosts the log files) is now
available here [6] or [7].
  - There are a number of companies today using this solution for their third-party
CI needs.
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/080058.html

Process Change For Closing Bugs When Patches Merge
==

* Today when a patch merges with ‘Closes-Bug’ in the commit message, that marks
  the associated bug as ‘Fix Committed’ to indicate fixed, but not yet in a
  release.
* The release team uses automated tools to mark bugs from ‘Fix Committed’ to
  ‘Fix Released’, but they’re not reliable due to Launchpad issues.
* Proposal for automated tools to improve reliability: patches with
  ‘Closes-Bug’ in the commit message will have the associated
  bug marked as ‘Fix Released’ instead of ‘Fix Committed’.
* Doug would like to have this be in effect next week.
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078280.html

Move From Active Distrusting Model to Trusting Model


* Morgan Fainberg writes most projects have a distrusting policy that prevents
  the following scenario:

  - Employee from Company A writes code
  - Other Employee from Company A reviews code
  - Third Employee from Company A reviews and approves code.

* Proposal for a trusting model:

  - Code reviews still need 2x Core Reviewers (no change)
  - Code can be developed by a member of the same company as both core
reviewers (and approvers).
  - If the trust that is being given via this new policy is violated, the code
can [if needed], be reverted (we are using git here) and the actors in
question can lose core status (PTL discretion) and the policy can be
changed back to the "distrustful" model described above.

* Dolph Mathews provides scenarios where the “distrusting” model either did or
  would have helped:

  - Employee is reprimanded by management for not positively reviewing &
approving a coworkers patch.
  - A team of employees is pressured to land a feature as fast as
   possible. Minimal community involvement means a faster path to “merged,”
   right?
  - A large group of reviewers from the author's organization repeatedly
throwing *many* careless +1s at a single patch. (These happened to not be
cores, but it's a related organizational behavior taken to an extreme.)
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/080238.html

Stable Team PTL Nominations Are Open


* As discussed [8][9] of setting up a standalone stable maintenance team, we’ll
  be organizing PTL elections over the coming weeks.
* Stable team’s mission:

  - 

Re: [openstack-dev] [ceph] DevStack plugin for Ceph required for Mitaka-1 and beyond?

2015-11-27 Thread Sebastien Han
The code is here: https://github.com/openstack/devstack-plugin-ceph

> On 25 Nov 2015, at 13:49, Sebastien Han  wrote:
> 
> The patch just landed, as soon as I have the repo I’ll move the code and do 
> some testing.
> 
>> On 24 Nov 2015, at 16:20, Sebastien Han  wrote:
>> 
>> Hi Ramana,
>> 
>> I’ll resurrect the infra patch and put the project under the right namespace.
>> There is no plugin at the moment.
>> I’ve figured out that this is quite urgent and we need to solve this asap 
>> since devstack-ceph is used by the gate as well :-/
>> 
>> I don’t think there are many changes to do on the plugin itself.
>> Let’s see if we can make all of this happen before Mitaka-1… I highly doubt it,
>> but we’ll see…
>> 
>>> On 24 Nov 2015, at 15:31, Ramana Raja  wrote:
>>> 
>>> Hi,
>>> 
>>> I was trying to figure out the state of DevStack plugin
>>> for Ceph, but couldn't find its source code and ran into
>>> the following doubt. At Mitaka 1, i.e. next week, wouldn't
>>> Ceph related Jenkins gates (e.g. Cinder's gate-tempest-dsvm-full-ceph)
>>> that still use extras.d's hook script instead of a plugin, stop working?
>>> For reference,
>>> https://github.com/openstack-dev/devstack/commit/1de9e330de9fd509fcdbe04c4722951b3acf199c
>>> [Deepak, thanks for reminding me about the deprecation of extra.ds.]
>>> 
>>> The patch that seeks to integrate Ceph DevStack plugin with Jenkins
>>> gates is under review,
>>> https://review.openstack.org/#/c/188768/
>>> It's outdated, as the devstack-ceph-plugin it seeks to integrate
>>> seems to be in the now obsolete namespace, 'stackforge/', and hasn't seen
>>> activity for quite some time.
>>> 
>>> Even if I'm mistaken about all of this can someone please point me to
>>> the Ceph DevStack plugin's source code? I'm interested to know whether
>>> the plugin would be identical to the current Ceph hook script,
>>> extras.d/60-ceph.sh ?
>>> 
>>> Thanks,
>>> Ramana
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> Cheers.
>> 
>> Sébastien Han
>> Senior Cloud Architect
>> 
>> "Always give 100%. Unless you're giving blood."
>> 
>> Mail: s...@redhat.com
>> Address: 11 bis, rue Roquépine - 75008 Paris
>> 
> 
> 
> Cheers.
> 
> Sébastien Han
> Senior Cloud Architect
> 
> "Always give 100%. Unless you're giving blood."
> 
> Mail: s...@redhat.com
> Address: 11 bis, rue Roquépine - 75008 Paris
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Cheers.

Sébastien Han
Senior Cloud Architect

"Always give 100%. Unless you're giving blood."

Mail: s...@redhat.com
Address: 11 bis, rue Roquépine - 75008 Paris



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][vpnaas] "SKIPPED: Neutron support is required" while running tox

2015-11-27 Thread bharath

Hi,

when running tox with "sudo -u stack -H tox -e api
neutron.tests.api.test_vpnaas_extensions",
the test cases are failing with the error "setUpClass
(neutron.tests.api.test_vpnaas_extensions.VPNaaSTestJSON) ... SKIPPED:
Neutron support is required".


I even tried setting TEMPEST_CONFIG_DIR as below, but I am still hitting the
issue.


   [testenv:api]
   basepython = python2.7
   passenv = {[testenv]passenv} TEMPEST_CONFIG_DIR
   setenv = {[testenv]setenv}
            OS_TEST_PATH=./neutron/tests/api
            TEMPEST_CONFIG_DIR={env:TEMPEST_CONFIG_DIR:/opt/stack/tempest/etc}
            OS_TEST_API_WITH_REST=1



Can someone help me out?


Thanks,
bharath


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][kuryr] ipam driver implementation

2015-11-27 Thread Vikas Choudhary
Hi Team,

I would like to request you all to go through
https://review.openstack.org/#/c/248042/ once you get some time, so that we
can use the next weekly meeting's time efficiently.


Thanks
Vikas Choudhary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for live migration

2015-11-27 Thread 金运通
In case there are 50 VMs on a host and the operator wants to migrate them all to
maintain/shutdown the host, to avoid OOM in this case we also need to consider
host memory when increasing the cache size.

BR,
YunTongJin

2015-11-27 22:25 GMT+08:00 金运通 :

> I think it'd be necessary to set live_migration_compression=on|off
> dynamically according to memory and cpu on the host at the beginning of a
> compressed migration. Consider the case where there are 50 VMs on a host and
> the operator wants to migrate them all to maintain/shutdown the host; having
> compression=on|off set dynamically will avoid host OOM, and also with this,
> we can even consider leaving the scheduler out (aka, not alerting the
> scheduler about the memory/cpu consumed by compression).
>
>
> BR,
> YunTongJin
>
> 2015-11-27 21:58 GMT+08:00 Daniel P. Berrange :
>
>> On Fri, Nov 27, 2015 at 01:01:15PM +, Koniszewski, Pawel wrote:
>> > > -Original Message-
>> > > From: Daniel P. Berrange [mailto:berra...@redhat.com]
>> > > Sent: Friday, November 27, 2015 1:24 PM
>> > > To: Koniszewski, Pawel
>> > > Cc: OpenStack Development Mailing List (not for usage questions);
>> ???; Feng,
>> > > Shaohe; Xiao, Guangrong; Ding, Jian-feng; Dong, Eddie; Wang, Yong Y;
>> Jin,
>> > > Yuntong
>> > > Subject: Re: [openstack-dev] [nova] [RFC] how to enable xbzrle
>> compress for
>> > > live migration
>> > >
>> > > On Fri, Nov 27, 2015 at 12:17:06PM +, Koniszewski, Pawel wrote:
>> > > > > -Original Message-
>> > > > > > > Doing this though, we still need a solution to the host OOM
>> > > > > > > scenario problem. We can't simply check free RAM at start of
>> > > > > > > migration and see if there's enough to spare for compression
>> > > > > > > cache, as the schedular can spawn a new guest on the compute
>> > > > > > > host at any time, pushing us into OOM. We really need some way
>> > > > > > > to indicate that there is a (potentially very large) extra RAM
>> > > > > > > overhead for the guest during
>> > > > > migration.
>> > > >
>> > > > What about CPU? We might end up with live migration that degrades
>> > > > performance of other VMs on source and/or destination node. AFAIK
>> CPUs
>> > > > are heavily oversubscribed in many cases and this does not help.
>> > > > I'm not sure that this thing fits into Nova as it requires resource
>> > > > monitoring.
>> > >
>> > > Nova already has the ability to set CPU usage tuning rules against
>> each VM.
>> > > Since the CPU overhead is attributed to the QEMU process, these
>> existing
>> > > tuning rules will apply. So there would only be impact on other VMs,
>> if you
>> > > do
>> > > not have any CPU tuning rules set in Nova.
>> >
>> > Not sure I understand it correctly, I assume that you are talking about
>> CPU
>> > pinning. Does it mean that compression/decompression runs as part of VM
>> > threads?
>> >
>> > If not then, well, it will require all VMs to be pinned on both hosts,
>> source
>> > and destination (and in the whole cluster because of static
>> configuration...).
>> > Also what about operating system performance? Will QEMU distinct OS
>> processes
>> > somehow and won't affect them?
>>
>> The compression runs in the migration thread of QEMU. This is not a vCPU
>> thread, but one of the QEMU emulator threads. So CPU usage policy set
>> against the QEMU emulator threads applies to the compression CPU overhead.
>>
>> > Also, nova can reserve some memory for the host. Will QEMU also respect
>> it?
>>
>> No, its not QEMU's job to respect that. If you want to reserve resources
>> for only the host OS, then you need to setup suitable cgroup partitions
>> to separate VM from non-VM processes. The Nova reserved memory setting
>> is merely a hint to the schedular - it has no functional effect on its
>> own.
>>
>> Regards,
>> Daniel
>> --
>> |: http://berrange.com  -o-
>> http://www.flickr.com/photos/dberrange/ :|
>> |: http://libvirt.org  -o-
>> http://virt-manager.org :|
>> |: http://autobuild.org   -o-
>> http://search.cpan.org/~danberr/ :|
>> |: http://entangle-photo.org   -o-
>> http://live.gnome.org/gtk-vnc :|
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] using reno for libraries

2015-11-27 Thread Doug Hellmann
Liaisons,

We're making good progress on adding reno to service projects as
we head to the Mitaka-1 milestone. Thank you!

We also need to add reno to all of the other deliverables with
changes that might affect deployers. That means clients and other
libraries, SDKs, etc. with configuration options or where releases
can change deployment behavior in some way. Now that most teams
have been through this conversion once, it should be easy to replicate
for the other repositories in a similar way.

Libraries have 2 audiences for release notes: developers consuming
the library and deployers pushing out new versions of the libraries.
To separate the notes for the two audiences, and avoid doing manually
something that we have been doing automatically, we can use reno
just for deployer release notes (changes in support for options,
drivers, etc.). That means the library repositories that need reno
should have it configured just like for the service projects, with
the separate jobs and a publishing location different from their
existing developer documentation. The developer docs can continue
to include notes for the developer audience.

After we start using reno for libraries, the release announcement
email tool will be updated to use those same notes to build the
message in addition to looking at the git change log. This will be
a big step toward unifying the release process for services and
libraries, and will allow us to make progress on completing the
automation work we have planned for this cycle.

It's not necessary to add reno to the liberty branch for library
projects, since we tend to backport far fewer changes to libraries.
If you maintain a library that does see a lot of backports, by all
means go ahead and add reno, but it's not a requirement. If you do
set up multiple branches, make sure you have one page that uses the
release-notes directive without specifying a branch, as in the
oslo.config example, to build notes for the "current" branch to get
releases from master and to serve as a test for rendering notes
added to stable branches.
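
As a quick illustration (the file names below are just a common convention,
not a requirement), such a page is simply a small RST file using reno's
release-notes directive, and individual notes are added with "reno new <slug>"
under releasenotes/notes/:

    releasenotes/source/unreleased.rst:

        ==============================
         Current Series Release Notes
        ==============================

        .. release-notes::

    releasenotes/source/liberty.rst:

        ==============================
         Liberty Series Release Notes
        ==============================

        .. release-notes::
           :branch: origin/stable/liberty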

Thanks,
Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for live migration

2015-11-27 Thread Murray, Paul (HP Cloud)


> -Original Message-
> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> Sent: 26 November 2015 17:58
> To: Carlton, Paul (Cloud Services)
> Cc: 少合冯; OpenStack Development Mailing List (not for usage questions);
> John Garbutt; pawel.koniszew...@intel.com; yuntong@intel.com;
> shaohe.f...@intel.com; Murray, Paul (HP Cloud); liyong.q...@intel.com
> Subject: Re: [nova] [RFC] how to enable xbzrle compress for live migration
> 
> On Thu, Nov 26, 2015 at 05:49:50PM +, Paul Carlton wrote:
> > Seems to me the prevailing view is that we should get live migration
> > to figure out the best setting for itself where possible.  There was
> > discussion of being able to have a default policy setting that will allow
> > the operator to define the balance between speed of migration and impact
> > on the instance.  This could be a global default for the cloud with
> > overriding defaults per aggregate, image, tenant and instance as well
> > as the ability to vary the setting during the migration operation.
> >
> > Seems to me that items like compression should be set in configuration
> > files based on what works best given the cloud operator's environment?
> 
> Merely turning on use of compression is the "easy" bit - there needs to be a
> way to deal with compression cache size allocation, which needs to have
> some smarts in Nova, as there's no usable "one size fits all" value for the
> compression cache size. If we did want to hardcode a compression cache
> size, you'd have to pick set it as a scaling factor against the guest RAM 
> size.
> This is going to be very heavy on memory usage, so there needs careful
> design work to solve the problem of migration compression triggering host
> OOM scenarios, particularly since we can have multiple concurrent
> migrations.
> 


Use cases for live migration generally fall into two types:

1. I need to empty the host (host maintenance/reboot)

2. I generally want to balance load on the cloud

The first case is by far the most common need right now, and in that case the
node gets progressively emptier as VMs are moved off. So the resources
available for caching etc. grow as the process goes on.

The second case is less likely to be urgent from the operator's point of view,
so doing things more slowly may not be a problem.

So looking at how much resource is available at the start of a migration and
deciding then what to do on a per-VM basis is probably not a bad idea,
especially if we can differentiate between the two cases.


> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova]

2015-11-27 Thread Serguei Bezverkhi (sbezverk)

Hello team,

I came across an issue in Liberty where the NUMATopologyFilter was failing my
request to launch an instance: I was requesting strict CPU pinning, which
happened to land on socket 1, while at the same time requesting SR-IOV ports
from a PCIe device with NUMA affinity to socket 0. This has been confirmed as
behaviour by design. However, when I tried to figure out which NUMA socket a
specific SR-IOV port is bound to using OpenStack commands, I could not. I
would like to suggest adding a NUMA affinity field to all physical resources
managed by OpenStack (at this point I can think of SR-IOV PFs and VFs) and
printing it whenever details for a physical resource are shown. That would
make it much easier to select the right ports or resources based on their
NUMA affinity.
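
For what it is worth, the kernel already exposes this information; the sketch
below (hypothetical PCI addresses, not an existing OpenStack command) shows
where such a NUMA affinity field could be read from on the host:

    # Illustration only: look up the NUMA node of a PCI (e.g. SR-IOV) device
    # via sysfs on Linux. The PF address below is hypothetical.
    import os

    def pci_numa_node(pci_addr):
        """Return the NUMA node of a PCI device, or None if unknown (-1)."""
        path = "/sys/bus/pci/devices/{}/numa_node".format(pci_addr)
        with open(path) as f:
            node = int(f.read().strip())
        return node if node >= 0 else None

    def vf_addresses(pf_addr):
        """List the PCI addresses of the VFs behind a physical function."""
        base = "/sys/bus/pci/devices/{}".format(pf_addr)
        return sorted(
            os.path.basename(os.readlink(os.path.join(base, entry)))
            for entry in os.listdir(base)
            if entry.startswith("virtfn")
        )

    pf = "0000:05:00.0"  # hypothetical PF address
    print("PF NUMA node:", pci_numa_node(pf))
    for vf in vf_addresses(pf):
        print(vf, "->", pci_numa_node(vf))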

Thank you and appreciate your thoughts/comments

Serguei
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Tempest] Use tempest-config for tempest-cli-improvements

2015-11-27 Thread Jordan Pittier
Hi,
I think this script is valuable to some users: Rally and Red Hat have
expressed their needs, and they seem clear.

This tool is far from bullet-proof, and if used blindly or in case of bugs,
Tempest could be misconfigured. So we could have this tool inside the
Tempest repository (in tools/) but not use it at all for the gate.

I am not sure I fully understand the resistance to this: if we don't use
this config generator for the gate, what's the risk?

Jordan

On Fri, Nov 27, 2015 at 8:05 AM, Ken'ichi Ohmichi 
wrote:

> 2015-11-27 15:40 GMT+09:00 Daniel Mellado :
> > I still do think that even if there are some issues with the feature,
> > such as skipping tests in the gate, the feature itself is still good;
> > we just won't use it for the gates. Instead it'd be used as a wrapper
> > for a user who would be interested in trying it against a real cloud.
> >
> > Ken, do you really think a tempest user should know all tempest options?
> > As you pointed out there are quite a few of them, and even if users
> > should at least know their environment, this script would set a minimum
> > acceptable default. Do you think the PTL and Pre-PTL concerns that we
> > spoke of would still apply to that scenario?
>
> If Tempest users run only part of Tempest's tests, they only need to know
> the options that are used by those tests.
> For example, current Tempest contains ironic API tests and the
> corresponding options.
> If users don't want to run these tests because the cloud doesn't support
> the ironic API, they don't need to know or set up these options.
> I feel users need to know the options used by the tests they want to run,
> because they need to investigate the reason if they face a problem during
> Tempest tests.
>
> Tempest options already ship with default values, but you need a
> script to change them from the default.
> Don't these default values work for your cloud at all?
> If they don't, those default values should be improved.
>
> Thanks
> Ken Ohmichi
>
> ---
>
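
As an aside, the core of the generator being debated here can be quite small.
Below is a rough sketch of the idea, flipping the service_available flags for
the services mentioned elsewhere in this thread based on a list of detected
services; the layout and paths are purely illustrative and this is not the
Rally or Red Hat tool:

    # Purely illustrative sketch: flip [service_available] flags in an
    # existing tempest.conf from a list of detected services. Paths and the
    # service list are assumptions, not the actual Rally/Red Hat tool.
    try:
        import configparser                  # Python 3
    except ImportError:
        import ConfigParser as configparser  # Python 2

    KNOWN_SERVICES = ("neutron", "heat", "sahara", "ceilometer", "ironic")

    def set_service_flags(conf_path, available_services):
        cfg = configparser.ConfigParser()
        cfg.read(conf_path)
        if not cfg.has_section("service_available"):
            cfg.add_section("service_available")
        for svc in KNOWN_SERVICES:
            cfg.set("service_available", svc,
                    "True" if svc in available_services else "False")
        with open(conf_path, "w") as f:
            cfg.write(f)

    # Example: a cloud that exposes neutron and heat but not the rest.
    # set_service_flags("/opt/stack/tempest/etc/tempest.conf",
    #                   {"neutron", "heat"})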
> > Andrey, Yaroslav. Would you like to revisit the blueprint to adapt it to
> > tempest-cli improvements? What do you think about this, Masayuki?
> >
> > Thanks for all your feedback! ;)
> >
> > El 27/11/15 a las 00:15, Andrey Kurilin escribió:
> >
> > Sorry for the wrong numbers. The bug fix for the issue with counters is
> > merged. Correct numbers (latest result from Rally's gate [1]):
> >  - total number of executed tests: 1689
> >  - success: 1155
> >  - skipped: 534 (neutron,heat,sahara,ceilometer are disabled. [2] should
> > enable them)
> >  - failed: 0
> >
> > [1] -
> >
> http://logs.openstack.org/27/246627/11/gate/gate-rally-dsvm-verify-full/800bad0/rally-verify/7_verify_results_--html.html.gz
> > [2] - https://review.openstack.org/#/c/250540/
> >
> > On Thu, Nov 26, 2015 at 3:23 PM, Yaroslav Lobankov <
> yloban...@mirantis.com>
> > wrote:
> >>
> >> Hello everyone,
> >>
> >> Yes, I am working on this now. We have some success already, but there
> >> is a lot of work to do. Of course, some things don't work ideally. For
> >> example, in [2] from the previous letter we don't actually have 24
> >> skipped tests but many more, so we have a bug somewhere :)
> >>
> >> Regards,
> >> Yaroslav Lobankov.
> >>
> >> On Thu, Nov 26, 2015 at 3:59 PM, Andrey Kurilin 
> >> wrote:
> >>>
> >>> Hi!
> >>> Boris P. and I tried to push a spec [1] for an automated tempest config
> >>> generator, but we did not succeed in merging it. IMO, the QA team
> >>> doesn't want to have such a tool :(
> >>>
> >>> >However, there is a big concern:
> >>> >If the script contains a bug and creates a configuration which makes
> >>> >most tests skipped, we cannot do enough testing on the gate.
> >>> >Tempest contains 1432 tests, and it is difficult to detect which tests
> >>> >are skipped unexpectedly.
> >>>
> >>> Yaroslav Lobankov is working on an improvement to the tempest config
> >>> generator in Rally. The last time we launched a full tempest run [2],
> >>> we got 1154 successful tests and only 24 skipped. Also, there is a
> >>> patch which adds an x-fail mechanism (based on subunit-filter): you
> >>> pass in a file with test names + reasons and Rally will adjust the
> >>> results accordingly.
> >>>
> >>> [1] - https://review.openstack.org/#/c/94473/
> >>>
> >>> [2] -
> >>>
> http://logs.openstack.org/49/242849/8/check/gate-rally-dsvm-verify/e91992e/rally-verify/7_verify_results_--html.html.gz
> >>>
> >>> On Thu, Nov 26, 2015 at 1:52 PM, Ken'ichi Ohmichi <
> ken1ohmi...@gmail.com>
> >>> wrote:
> 
>  Hi Daniel,
> 
>  Thanks for pointing this up.
> 
>  2015-11-25 1:40 GMT+09:00 Daniel Mellado  >:
>  > Hi All,
>  >
>  > As you might already know, within Red Hat's tempest fork, we do have
>  > one
>  > tempest configuration script which was built in the past by David
>  > Kranz [1]
>  > and that's been actively used in our CI system. Regarding this
> topic,
> 

[openstack-dev] [Performance] Filling performance team working items list and taking part in their resolving

2015-11-27 Thread Dina Belova
Hey OpenStack devs and operators!

Folks, I would like to share the list of work items the Performance Team
currently has in its backlog:
https://etherpad.openstack.org/p/perf-zoom-zoom (the [Work Items to grab]
section). I really encourage you to fill it with concrete pieces of work you
think will be useful, and to take part in the development/investigation by
assigning some of them to yourself and working on them :)

Cheers,
Dina
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] More and more circular build dependencies: what can we do to stop this?

2015-11-27 Thread Thomas Goirand
On 11/26/2015 05:31 PM, Robert Collins wrote:
> On 27 November 2015 at 03:50, Thomas Goirand  wrote:
>> Hi,
>>
>> As a package maintainer, I'm seeing more and more circular
>> build-dependency. The latest of them is between oslotest and oslo.config
>> in Mitaka.

The situation with oslotest is even more annoying than what I just
wrote: on top of oslo.config, it also needs os-client-config and
debtcollector, and all of them need oslotest to build.

>> There's been some added between unittest2, linecache2 and traceback2
>> too, which are now really broadly used.
>>
>> The only way I can work around this type of issue is to temporarily
>> disable the unit tests (or allow them to fail), build both packages, and
>> revert the unit tests tweaks. That's both annoying and frustrating to do.
>>
>> What can we do so that it doesn't constantly happen again and again?
>> It's a huge pain for downstream package maintainers and distros.
>>
>> Cheers,
>>
>> Thomas Goirand (zigo)
> 
> Firstly, as Thierry says, we're not well equipped to stop things
> happening without tests, its the nature of a multi-thousand developer
> structure.
> 
> Secondly, the cases you cite are not circular build dependencies: they
> are circular test dependencies, which are not the same thing.
> 
> I realise that the Debian and RPM tooling around this has historically
> been weak, but it's improving -
> https://wiki.debian.org/DebianBootstrap#Circular_dependencies.2Fstaged_builds
> - covers the current state of the art, and should, AIUI, entirely
> address your needs: you do one build that is just a pure build with no
> tests-during-build-time, then when the build phase of everything is
> covered, a second stage 'normal' build that includes tests.
> 
> -Rob

Robert,

I'm well aware that these are circular test dependencies. Though it's not
only tests: it also impacts the Sphinx docs.

At the moment, it's not easy at all to bootstrap a new release of OpenStack
without cheating (i.e. picking some already-built dependencies from past
releases).

Yes, I can work around all of these. The thing is, I would prefer these
dependencies to be optional, with tests skipped when a dependency isn't
present, because working around them is very time consuming. So if something
could be done upstream to mitigate this problem as much as possible, instead
of just completely ignoring it, that would be really nice.

So Thierry's answer is: we should test for it. The question is, how? I saw
some build dependency graphs done with graphviz. How are they produced? That
would be a good starting point for knowing the current state of things.
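
One way such graphs could be produced (an illustration only, and covering
only install-time requirements, not build- or test-only ones) is to walk the
requirements of the packages visible in a virtualenv and emit Graphviz DOT:

    # Illustration only: emit a Graphviz DOT graph of the install-time
    # dependencies of the Python packages visible in the current environment.
    # Feed the output to "dot -Tpng -o deps.png" to render it.
    import pkg_resources

    def dependency_dot(projects=None):
        """Return DOT text for the given project names (default: all)."""
        lines = ["digraph deps {"]
        for dist in pkg_resources.working_set:
            name = dist.project_name
            if projects and name not in projects:
                continue
            for req in dist.requires():
                lines.append('  "{}" -> "{}";'.format(name, req.project_name))
        lines.append("}")
        return "\n".join(lines)

    print(dependency_dot({"oslotest", "oslo.config",
                          "os-client-config", "debtcollector"}))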

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Supporting Django 1.9

2015-11-27 Thread Thomas Goirand
Hi,

Django 1.9 is due to be released in early December, and will reach
Debian Sid soon after that. It'd be nice to have fixes for it ASAP. I
already have bugs against some of the packages I maintain:

https://bugs.debian.org/806365
https://bugs.debian.org/806362

Any help (patches sent upstream and/or to the Debian BTS) to fix these
would be greatly appreciated.

Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-27 Thread Igor Kalnitsky
Hey Vladimir,

Thanks for your effort on this job. Unfortunately we don't have much time
left and FF is coming, so I'm afraid it has become unrealistic to land this
before FF, especially if it takes 2-3 days to fix the system tests.


Andrew,

I had the same opinion some time ago, but it changed because nobody has put
effort into fixing our Docker experience. Moreover, Docker is still buggy
and we have plenty of issues, such as stale mount points. Besides, I don't
like our upgrade procedure:

1. Install fuel-docker-images.rpm
2. Load images from the installed tarball into Docker
3. Re-create containers from the new images

Steps (2) and (3) are manual and break the idea of a "yum update" delivery
approach.

Thanks,
Igor

On Wed, Nov 25, 2015 at 9:43 PM, Andrew Woodward  wrote:
> 
> IMO, removing the docker containers is a mistake vs. fixing them and using
> them properly. They provide an isolation that is necessary (and that we
> mangle) to make services portable and scalable. We really should sit down
> and document how we really want all of the services to interact before we
> rip the containers out.
>
> I agree, the way we use containers now is still quite wrong and brings us
> some negative value, but I'm not sold on stripping them out now just because
> they no longer bring the same upgrade value as before.
> 
>
> My opinion aside, we are rushing into this far too late in the feature
> cycle. Prior to moving forward with this, we need a good QA plan; the spec
> is quite light on that and must receive review and approval from QA. This
> needs to include an actual testing plan.
>
> From the implementation side, we are pushing up against the FF deadline. We
> need to document what our time objectives are for this and when we will no
> longer consider this for 8.0.
>
> Lastly, for those who are +1 on the thread here, please review and comment
> on the spec; it has received almost no attention for something with such a
> large impact.
>
> On Tue, Nov 24, 2015 at 4:58 PM Vladimir Kozhukalov
>  wrote:
>>
>> The status is as follows:
>>
>> 1) The fuel-main [1] and fuel-library [2] patches can deploy the master
>> node without docker containers
>> 2) I've not built an experimental ISO yet (I have been testing and
>> debugging manually)
>> 3) There are still some flaws (need better formatting, etc.)
>> 4) The plan for tomorrow is to build an experimental ISO, begin fixing the
>> system tests, and fix the spec.
>>
>> [1] https://review.openstack.org/#/c/248649
>> [2] https://review.openstack.org/#/c/248650
>>
>> Vladimir Kozhukalov
>>
>> On Mon, Nov 23, 2015 at 7:51 PM, Vladimir Kozhukalov
>>  wrote:
>>>
>>> Colleagues,
>>>
>>> I've started working on the change. Here are two patches (fuel-main [1]
>>> and fuel-library [2]). They are not ready for review (they still don't
>>> work and are under active development). The changes are not going to be
>>> huge. Here is the spec [3]. I will keep the status up to date in this ML
>>> thread.
>>>
>>>
>>> [1] https://review.openstack.org/#/c/248649
>>> [2] https://review.openstack.org/#/c/248650
>>> [3] https://review.openstack.org/#/c/248814
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Mon, Nov 23, 2015 at 3:35 PM, Aleksandr Maretskiy
>>>  wrote:



 On Mon, Nov 23, 2015 at 2:27 PM, Bogdan Dobrelya
  wrote:
>
> On 23.11.2015 12:47, Aleksandr Maretskiy wrote:
> > Hi all,
> >
> > as you know, Rally runs inside docker on the Fuel master node, so the
> > docker removal (a good improvement) is a problem for Rally users.
> >
> > To solve this, I'm planning to make a native Rally installation on the
> > Fuel master node running CentOS 7, and then write a step-by-step
> > instruction for how to do this installation.
> >
> > So I hope the docker removal will not cause issues for Rally users.
>
> I believe the most backwards-compatible scenario is to keep docker
> installed while moving the fuel-* docker things back to the host OS.
> So nothing would prevent a user from pulling and running whichever docker
> containers they want to put on the Fuel master node. Makes sense?
>

 Sounds good


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
>
> --
>
> Andrew Woodward
>
> Mirantis
>
> Fuel Community Ambassador
>
> Ceph Community
>
>
> 

Re: [openstack-dev] [Fuel] CentOS-7 Transition Plan

2015-11-27 Thread Igor Kalnitsky
Hey Roman,

Few notes about fuel-web patches:

* https://review.openstack.org/#/c/246535/ - Could be (and should be)
merged after FF.
* https://review.openstack.org/#/c/246531/ - Has -1 from Jenkins.
Looks like a floating test failure, so I've restarted tests. But
please track this one, and fix it if it fails again.

Thanks,
Igor

On Thu, Nov 26, 2015 at 12:22 PM, Roman Vyalov  wrote:
> Hi all,
> Some of those change requests may be merged shortly (today); they are
> compatible with CentOS 6.
> List of change requests ready to merge (compatible with CentOS 6):
> Fuel-library
>
> https://review.openstack.org/#/c/247066/
> https://review.openstack.org/#/c/248781/
> https://review.openstack.org/#/c/247727/
>
> Fuel-nailgun-agent
>
> https://review.openstack.org/#/c/244810/
>
> Fuel-web
>
> https://review.openstack.org/#/c/248206/
> https://review.openstack.org/#/c/246531/
> https://review.openstack.org/#/c/246535/
>
> Python-fuelclient
>
> https://review.openstack.org/#/c/231935/
>
> Fuel-ostf
>
> https://review.openstack.org/#/c/248096/
>
> Fuel-menu
>
> https://review.openstack.org/#/c/246888/
>
>
> The full list of change requests related to CentOS 7 support:
> https://etherpad.openstack.org/p/fuel_on_centos7
>
> On Tue, Nov 24, 2015 at 4:37 PM, Oleg Gelbukh  wrote:
>>
>> That's good to know, thank you, Vladimir, Dmitry.
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>> On Tue, Nov 24, 2015 at 3:10 PM, Vladimir Kozhukalov
>>  wrote:
>>>
>>> In fact, we (Dmitry and I) are on the same page about how to merge these
>>> two features (CentOS 7 and Docker removal). We agreed that Dmitry's
>>> feature is much more complicated and of higher priority. So CentOS 7
>>> should be merged first, and then I'll rebase my patches (mostly
>>> supervisor -> systemd).
>>>
>>>
>>>
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Tue, Nov 24, 2015 at 1:57 AM, Igor Kalnitsky 
>>> wrote:

 Hey Dmitry,

 Thank you for your effort. I believe it's a huge step forward that
 opens up a number of possibilities.

 > Every container runs systemd as PID 1 process instead of
 > supervisord or application / daemon.

 Taking into account that we're going to drop the Docker containers, I
 think it was an unnecessary complication of your work.

 Please sync-up with Vladimir Kozhukalov, he's working on getting rid
 of containers.

 > Every service inside a container is a systemd unit. Container build
 > procedure was modified, scripts setup.sh and start.sh were introduced
 > to be running during building and configuring phases respectively.

 Ditto. :)

 Thanks,
 Igor

 P.S.: I wrote this mail and forgot to press the "send" button. It looks like
 Oleg has already pointed out what I wanted to.

 On Mon, Nov 23, 2015 at 2:37 PM, Oleg Gelbukh 
 wrote:
 > Please, take into account the plan to drop the containerization of
 > Fuel
 > services:
 >
 > https://review.openstack.org/#/c/248814/
 >
 > --
 > Best regards,
 > Oleg Gelbukh
 >
 > On Tue, Nov 24, 2015 at 12:25 AM, Dmitry Teselkin
 > 
 > wrote:
 >>
 >> Hello,
 >>
 >> We've been working for some time on bringing CentOS-7 to master node,
 >> and now is the time to share and discuss the transition plan.
 >>
 >> First of all, what has been changed:
 >> * The master node itself runs on CentOS-7. Since all the containers
 >>   share the same repo as the master node, they have all been migrated
 >>   to CentOS-7 too. Every container runs systemd as the PID 1 process
 >>   instead of supervisord or the application / daemon itself.
 >> * Every service inside a container is a systemd unit. The container
 >>   build procedure was modified: scripts setup.sh and start.sh were
 >>   introduced to run during the build and configure phases respectively.
 >>   The main reason for this was that many puppet manifests use service
 >>   management commands that require a running systemd daemon. This also
 >>   allowed us to simplify the Dockerfiles by moving all actions into the
 >>   setup.sh file.
 >> * We also found and fixed some bugs in various parts.
 >> * The bootstrap image is also CentOS-7 based. It was updated to better
 >>   support it: some services were converted to systemd units and fixes
 >>   were made to support the new network naming scheme.
 >> * The ISO build procedure was updated to reflect changes in the CentOS-7
 >>   distribution and to support changes in the docker build procedure.
 >> * Many applications were updated (puppet, docker, openstack components).
 >> * Docker containers were moved to an LVM volume to improve performance
 >>   and get rid of annoying warning messages during master node deployment.

Re: [openstack-dev] [horizon] Supporting Django 1.9

2015-11-27 Thread Rob Cresswell (rcresswe)
Mitaka will support 1.9. I’m already working on it :)
Liberty is >= 1.7 and < 1.9, so shouldn’t matter.

Rob


On 27/11/2015 09:23, "Thomas Goirand"  wrote:

>Hi,
>
>Django 1.9 is due to be released in early December, and will reach
>Debian Sid soon after that. It'd be nice to have fixes for it ASAP. I
>already have bugs against some of the packages I maintain:
>
>https://bugs.debian.org/806365
>https://bugs.debian.org/806362
>
>Any help (patches sent upstream and/or to the Debian BTS) to fix these
>would be greatly appreciated.
>
>Cheers,
>
>Thomas Goirand (zigo)
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Supporting Django 1.9

2015-11-27 Thread Matthias Runge
On 27/11/15 10:23, Thomas Goirand wrote:
> Hi,
> 
> Django 1.9 is due to be released in early December, and will reach
> Debian Sid soon after that. It'd be nice to have fixes for it ASAP. I
> already have bugs against some of the packages I maintain:

I would expect upstream to go the same route as before: supporting the
last LTS version (which is currently Django-1.8), and the latest release
too.

We in horizon are currently working on fixing deprecation warnings [1]

Matthias


[1] https://blueprints.launchpad.net/horizon/+spec/drop-dj17





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ironic] do we really need websockify with numpy speedups?

2015-11-27 Thread Pavlo Shchelokovskyy
Hi Roman,

those wheels still have to be built and maintained by someone. As there are
no Linux wheels on upstream PyPI, they would have to be built and
maintained by openstack-infra, and I'm not sure how big a variety of
platforms would have to be supported. Just taking the corresponding deb/rpm
package from upstream seems like a better option in that case.

Cheers,

On Thu, Nov 26, 2015 at 3:57 PM Roman Podoliaka 
wrote:

> Hi Pavlo,
>
> Can we just use a wheel package for numpy instead?
>
> Thanks,
> Roman
>
> On Thu, Nov 26, 2015 at 3:00 PM, Pavlo Shchelokovskyy
>  wrote:
> > Hi again,
> >
> > I've gone ahead and created a proper pull request to websockify [0];
> > comment there if you think we need it :)
> >
> > I also realized that there is another option, which is to add
> > python-numpy to files/debs/ironic and files/debs/nova (strangely, it is
> > already present in rpms/ for the nova, noVNC and spice services).
> > This should install a pre-compiled version from the distro repos, and
> > should also speed things up.
> >
> > Any comments welcome.
> >
> > [0] https://github.com/kanaka/websockify/pull/212
> >
> > Best regards,
> >
> > On Thu, Nov 26, 2015 at 1:44 PM Pavlo Shchelokovskyy
> >  wrote:
> >>
> >> Hi all,
> >>
> >> I was long puzzled about why devstack installs numpy. While it is a
> >> fantastic package, it has the drawback of taking about 4 minutes to
> >> compile its C extensions when installed on our gates (e.g. [0]). I
> >> finally took the time to research it, and here is what I've found:
> >>
> >> it is used only by the websockify package (installed, AFAIK, by ironic
> >> and nova only), where it is used to speed up the HyBi protocol. Although
> >> the code itself has a path to work without numpy installed [1], the
> >> setup.py of websockify declares numpy as a hard dependency [2].
> >>
> >> My question is: do we really need those speedups? Do we test any feature
> >> requiring fast HyBi support on the gates? Not installing numpy would
> >> shave 4 minutes off any gate job that installs Nova or Ironic, which
> >> seems like a good deal to me.
> >>
> >> If we decide to save this time, I have prepared a pull request for
> >> websockify that moves the numpy requirement to "extras" [3]. As a
> >> consequence, numpy will not be installed by default as a dependency,
> >> but it will still be possible to install it with e.g.
> >> "pip install websockify[fastHyBi]", and package builders can still
> >> specify numpy as a hard dependency of the websockify package in their
> >> package specs.
> >>
> >> What do you think?
> >>
> >> [0]
> >>
> http://logs.openstack.org/82/236982/6/check/gate-tempest-dsvm-ironic-agent_ssh/1141960/logs/devstacklog.txt.gz#_2015-11-11_19_51_40_784
> >> [1]
> >>
> https://github.com/kanaka/websockify/blob/master/websockify/websocket.py#L143
> >> [2] https://github.com/kanaka/websockify/blob/master/setup.py#L37
> >> [3]
> >>
> https://github.com/pshchelo/websockify/commit/0b1655e73ea13b4fba9c6fb4122adb1435d5ce1a
> >>
> >> Best regards,
> >> --
> >> Dr. Pavlo Shchelokovskyy
> >> Senior Software Engineer
> >> Mirantis Inc
> >> www.mirantis.com
> >
> > --
> > Dr. Pavlo Shchelokovskyy
> > Senior Software Engineer
> > Mirantis Inc
> > www.mirantis.com
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
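
For readers unfamiliar with the mechanism, the "extras" change described in
[3] boils down to something like the following in setup.py; this is a rough
sketch under that assumption, not the actual pull request:

    # Rough sketch of the "extras" idea, not the actual websockify patch:
    # numpy moves from install_requires to an optional extra, so a plain
    # "pip install websockify" skips it, while
    # "pip install websockify[fastHyBi]" (or a distro package spec that
    # depends on numpy explicitly) still pulls it in.
    from setuptools import setup, find_packages

    setup(
        name="websockify",
        version="0.0.0",          # placeholder version
        packages=find_packages(),
        install_requires=[
            # numpy intentionally no longer listed here
        ],
        extras_require={
            "fastHyBi": ["numpy"],  # optional HyBi speedups
        },
    )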
-- 
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Not properly merged specifications

2015-11-27 Thread Igor Kalnitsky
Hey Sergii,

Yeah, I had a conversation with Dmitry B. and we have decided on the
following points:

* Component Leads (CLs) must ensure that specs are reviewed and +1'd by SMEs.
* All CLs have to +1/+2 every spec (even if it's almost unrelated to their
  components).
* Only when a spec has pluses from all component leads can it be approved by
  the PTL.

These rules should help to cover as many design gaps as possible, and they
also help everyone understand what's going on in the project.

Thanks,
Igor


On Wed, Nov 25, 2015 at 5:04 PM, Sergii Golovatiuk
 wrote:
> Hi Fuelers,
>
> Today, I noticed that many specifications are merged without +1/+2 from
> the different components [1]. Please add component leads to all
> specifications, and please add SMEs to make sure the specification covers
> all Fuel components.
>
> [1] https://review.openstack.org/#/c/229063/
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] "SKIPPED: Neutron support is required" while running tox

2015-11-27 Thread bharath

Hi,

When running tox with "sudo -u stack -H tox -e api
neutron.tests.api.test_vpnaas_extensions", the test cases fail with the error
"setUpClass (neutron.tests.api.test_vpnaas_extensions.VPNaaSTestJSON) ...
SKIPPED: Neutron support is required".


I even tried setting TEMPEST_CONFIG_DIR as below, but I am still hitting the
issue.


   [testenv:api]
   basepython = python2.7
   passenv = {[testenv]passenv} TEMPEST_CONFIG_DIR
   setenv = {[testenv]setenv}
            OS_TEST_PATH=./neutron/tests/api
            TEMPEST_CONFIG_DIR={env:TEMPEST_CONFIG_DIR:/opt/stack/tempest/etc}
            OS_TEST_API_WITH_REST=1



Can someone help me out?


Thanks,
bharath
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev