Re: [lxc-users] Desperately searching for Template for Fedora 22

2015-06-29 Thread Fajar A. Nugraha
On Tue, Jun 30, 2015 at 9:07 AM, Fajar A. Nugraha  wrote:
> On Tue, Jun 30, 2015 at 6:50 AM, Federico Alves  wrote:
>> I am looking for a template for Fedora 22 and it does not seem to exist.
>> How would I install this version without a template? Is it even possible?
>
> One way would be to try the fedora template from the latest git, just in
> case there are already updates that made it work with F22. At a quick
> glance it should at least support F21.


... and I just tested "lxc-create -n f22 -t fedora -B zfs -- -R 22" on
ubuntu 15.04, lxc 1.1.2+master~20150616-1, lxcfs
0.9-0ubuntu1~ubuntu15.0. Takes a long time, but works as expected.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Best backing store for 500 containers

2015-06-29 Thread Fajar A. Nugraha
On Tue, Jun 30, 2015 at 7:16 AM, Federico Alves  wrote:
> I need to create 500 identical containers, but after the first one, I don't
> want to repeat the same file 500 times. The disk is formatted ext4. What
> would be the best type of format or partition that is 100% sparse,
> i.e., that would never repeat the same information?

That's not the definition of sparse: https://en.wikipedia.org/wiki/Sparse_file

If you want to "create a container one time and clone it for the other
499", then you can create container_1 using a snapshot-capable
backing store (e.g. zfs), then run "lxc-clone -s container_1
container_2". This should create container_2 using the snapshot/clone
feature of the storage (at least this works on zfs, and it should work
on btrfs as well), so that the only additional space used is for
changed files/blocks (e.g. container config, /etc/hosts, and so on).
Note that as the containers get used, the number of changed files will
grow (e.g. logs, database files), and those changed files will use
additional space.
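
For example, something like this (a rough sketch: it assumes lxc 1.x, a
zfs pool already set up for LXC via lxc.bdev.zfs.root, and the ubuntu
template; adjust names and template to taste):

# create the first container on a zfs backing store
lxc-create -n container_1 -t ubuntu -B zfs

# clone it 499 times as copy-on-write snapshots
for i in $(seq 2 500); do
    lxc-clone -s container_1 container_$i
done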

See also "man lxc-clone":
Creates a new container as a clone of an existing container. Two types
of clones are supported: copy and snapshot. A copy clone copies the
root filesystem from the original container to the new. A snapshot
clone uses the backing store's snapshot functionality to create a
very small copy-on-write snapshot of the original container.
Snapshot clones require the new container backing store to support
snapshotting. Currently this includes only aufs, btrfs, lvm, overlayfs
and zfs. LVM devices do not support snapshots of snapshots.


If you REALLY want a system that "would never repeat the same
information", then you'd need dedup-capable storage. zfs can do that,
but it comes with high overhead (e.g. much higher memory requirements
compared to normal use, and it needs a fast L2ARC), and should NOT be
used unless you REALLY know what you're doing.
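
For completeness, turning it on is trivial, but do read up on the memory
cost first (sketch only; assumes the containers live on a dataset called
tank/lxc):

# enable dedup on the dataset holding the containers
zfs set dedup=on tank/lxc
# the DEDUP column of "zpool list" shows the resulting dedup ratio
zpool list tank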

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Desperately searching for Template for Fedora 22

2015-06-29 Thread Fajar A. Nugraha
On Tue, Jun 30, 2015 at 6:50 AM, Federico Alves  wrote:
> I am looking for a template for Fedora 22 and it does not seem to exist.
> How would I install this version without a template? Is it even possible?

One way would be to try the fedora template from the latest git, just in
case there are already updates that made it work with F22. At a quick
glance it should at least support F21.
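
Roughly (paths assume an Ubuntu-style install; the git copy of the
template is the .in file, so keep the few @...@ placeholder values from
your installed copy when merging):

git clone https://github.com/lxc/lxc.git
diff /usr/share/lxc/templates/lxc-fedora lxc/templates/lxc-fedora.in
# merge the relevant changes into the installed template, then retry:
lxc-create -n f22 -t fedora -- -R 22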

If that doesn't work, the generic way would be to get a working
installation (e.g. on qemu or whatever), then modify it using the
template as an example. IIRC the latest systemd should work fine in a
privileged container.
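
A very rough sketch of that route, run from an existing Fedora
machine/VM (the package set and paths are just an example):

mkdir -p /var/lib/lxc/f22/rootfs
dnf -y --installroot=/var/lib/lxc/f22/rootfs --releasever=22 \
    install systemd passwd dnf fedora-release vim-minimal
# then write /var/lib/lxc/f22/config by hand, using the fedora template
# and an existing container's config as a starting point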

If you don't have the skill or time to do that, then you should
probably just wait patiently until someone fixes the template.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Best backing store for 500 containers

2015-06-29 Thread Federico Alves
I need to create 500 identical containers, but after the first one, I don't
want to repeat the same file 500 times. The disk is formatted ext4. What
would be the best type of format or partition that is 100% sparse,
i.e., that would never repeat the same information?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Desperately searching for Template for Fedora 22

2015-06-29 Thread Federico Alves
I am looking for a template for Fedora 22 and it does not seem to exist.
How would I install this version without a template? Is it even possible?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Get the ip address of the host that hosts a container

2015-06-29 Thread Luis M. Ibarra
You can't. The host and the container are in different network namespaces.
You can only get the default gateway of your container, which is the other
end of your pipe (on the host side) if you're using veth.
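
For example, from inside the container (assumes iproute2; note this gives
you the gateway, which on a typical veth + bridge setup is the bridge
address on the host, not necessarily "the" host address):

ip route | awk '/^default/ {print $3}'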

Regards,
On Jun 29, 2015 6:40 AM, "Thouraya TH"  wrote:

> Hi all
>
> Please, is there a command to get the IP address of the host that hosts a
> container?
>
> I'd like to use this command inside the container.
>
> Thanks a lot.
> Best Regards.
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Is container GUI an option?

2015-06-29 Thread jjs - mainphrame
FYI -

http://srobb.net/nxreplace.html

On Mon, Jun 29, 2015 at 12:21 PM, Robert Gierzinger <
robert.gierzin...@gmx.at> wrote:

> xpra might also be of interest for you.
>
> Sent: Monday, 29 June 2015 at 17:10
> From: "Federico Alves" 
> To: "LXC users mailing-list" 
> Subject: [lxc-users] Is container GUI an option?
>  I have to create a large number of virtual machines, but with a GUI
> (Fedora 22, but I could use any Linux dist).
> Is there a known way to access each one of the containers' GUIs without
> installing VNC in each session? Ideally, we could execute a command and
> access the X session of any of the containers from the main computer
> session.
> Is this fantasy?
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Is container GUI an option?

2015-06-29 Thread Robert Gierzinger
xpra might also be of interest for you.
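
A minimal sketch of what that looks like (assumes xpra is installed in the
container and you can ssh into it; the display number and xterm are
arbitrary):

# inside the container: start an xpra session running an application
xpra start :100 --start-child=xterm
# on your desktop: attach to that session over ssh
xpra attach ssh:user@container-ip:100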
 

Sent: Monday, 29 June 2015 at 17:10
From: "Federico Alves" 
To: "LXC users mailing-list" 
Subject: [lxc-users] Is container GUI an option?



I have to create a large number of virtual machines, but with a GUI (Fedora 22, but I could use any Linux dist).

Is there a known way to access each one of the containers' GUIs without installing VNC in each session? Ideally, we could execute a command and access the X session of any of the containers from the main computer session.

Is this fantasy?
 




___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] ntpdate errors in vivid container

2015-06-29 Thread Andrey Repin
Greetings, Joe McDonald!

> host is Ubuntu 14.04.2 LTS
> container is Ubuntu 15.04 (vivid)
> lxc is 1.1.2


> using bridging for networking.
> container /etc/network/interfaces looks like:
> auto lo
> iface lo inet loopback


> auto eth0
> iface eth0 inet static


> bringing it up, everything works fine, but there is a 2 minute delay in the
> bootup process.  delay from (i think bringing up network calls ntpdate):

Why on earth are you calling ntpdate from the container?
Aren't you running ntp on the host?

> any idea why these errors are occurring?

Most likely because the container has no way to change the host's time.
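
IIRC the stock Ubuntu container configs drop the capability needed to set
the clock, which is why ntpdate fails inside the container. A quick sketch
of how to confirm that and stop it from trying (paths assume the standard
Ubuntu lxc and ntpdate packages):

# on the host: check the capabilities dropped for Ubuntu containers
grep cap.drop /usr/share/lxc/config/ubuntu.common.conf
# inside the container: remove the if-up hook installed by ntpdate
rm /etc/network/if-up.d/ntpdate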


-- 
With best regards,
Andrey Repin
Monday, June 29, 2015 20:14:01

Sorry for my terrible english...
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Is container GUI an option?

2015-06-29 Thread jjs - mainphrame
x2go has worked well for me, on both openvz containers and lxc containers.
Highly recommended.


On Mon, Jun 29, 2015 at 8:10 AM, Federico Alves  wrote:

> I have to create a large number of virtual machines, but with a GUI
> (Fedora 22, but I could use any Linux dist).
> Is there a known way to access each one of the containers' GUIs without
> installing VNC in each session? Ideally, we could execute a command and
> access the X session of any of the containers from the main computer
> session.
> Is this fantasy?
>
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Is container GUI an option?

2015-06-29 Thread Federico Alves
I have to create a large number of virtual machines, but with a GUI (Fedora
22, but I could use any Linux dist).
Is there a known way to access each one of the containers' GUIs without
installing VNC in each session? Ideally, we could execute a command and
access the X session of any of the containers from the main computer
session.
Is this fantasy?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC - Best way to avoid networking changes in a container

2015-06-29 Thread Benoit GEORGELIN - Association Web4all
Thanks for your answer.

I was thinking about removing the CAP_NET_ADMIN capability, but I think this
will have a big impact on applications running inside the container that
need sockets and the like.
I have to do a lot of tests. Not sure yet what I'm going to do.
The routing solution looks the most efficient.
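
For reference, dropping the capability would be a one-line change in the
container config (sketch only; note it blocks all interface, route and
firewall changes inside the container, not just IP changes):

lxc.cap.drop = net_admin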

Creating a network namespace could be another good solution, with less
impact and no host dependencies such as routing tables.

Everything should be up and running again after a reboot of the host or of
the container.

Maybe it would be interesting to provide a single container configuration
option, in the LXC configuration file, that fully manages this kind of
network setup, giving the opportunity to offer shared hosting containers
(which is what I'm looking for).


Regards,
Benoit

- Original Message -
From: "Serge Hallyn" 
To: "lxc-users" 
Sent: Monday, 29 June 2015 10:54:22
Subject: Re: [lxc-users] LXC - Best way to avoid networking changes in a container

Quoting Fajar A. Nugraha (l...@fajar.net): 
> On Fri, Jun 26, 2015 at 8:20 PM, Benoit GEORGELIN - Association 
> Web4all  wrote: 
> > Hi Fajar, 
> > 
> > If the container have this setting 
> > 
> > lxc.network.type = veth 
> > lxc.network.flags = up 
> > lxc.network.hwaddr = 00:16:3e:2e:51:17 
> > lxc.network.veth.pair = veth-cont1-0 
> > lxc.network.ipv4 = 209.126.100.172/32 
> > lxc.network.ipv4.gateway = 10.0.0.1 
> > 
> > 
> > And the root user in the container change the file /etc/network/interfaces 
> > to something else than 
> > 
> > iface eth0 inet manual 
> > 
> > Will the container configuration still be the one used, or will the new IP
> > address configured in the container be talking to the network through
> > the veth?
> 
> 
> The container config lines above make lxc-start configure the necessary
> IP and routes. If the container has its own configuration, it will
> override the currently active ip/routes.
> 
> If the container's root user changes its configuration (e.g.
> /etc/network/interfaces) to use the SAME IP/routes (like in my
> previous link), it would obviously still work.
> 
> If the container's root user changes it to use another container's (e.g.
> container B's) IP address, then AFAIK the host will simply ignore it.
> At least that's what happens in my tests.

If you really want to have the container not change its networking, I suppose 
you could either not grant it CAP_NET_ADMIN, or you could create a network 
namespace for the container, set it up, and then run the container inside 
that with 'lxc.network.type = none' in the container configuration. 

Otherwise, using ebtables/iptables to lock the container's veth to its mac
and ip seems the best way. It may be worth adding a new network_up hook
which is sent the names of the host-side nics, and run from the host
network namespace (obviously requiring root), to easily script setting these.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] ntpdate errors in vivid container

2015-06-29 Thread Serge Hallyn
Quoting Joe McDonald (ideafil...@gmail.com):
> host is Ubuntu 14.04.2 LTS
> container is Ubuntu 15.04 (vivid)
> lxc is 1.1.2
> 
> using bridging for networking.
> container /etc/network/interfaces looks like:
> auto lo
> iface lo inet loopback
> 
> auto eth0
> iface eth0 inet static

Why are you using static here?  Have you set up the IP address
using the container configuration file?  The problem with that
is that unless you've done it manually, DNS won't be set up, so
ntp would still fail.
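
For reference, a complete static stanza that also takes care of DNS would
look something like this (addresses are made up; dns-nameservers needs
resolvconf inside the container):

auto eth0
iface eth0 inet static
    address 10.0.3.50
    netmask 255.255.255.0
    gateway 10.0.3.1
    dns-nameservers 10.0.3.1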

> bringing it up, everything works fine, but there is a 2 minute delay in the
> bootup process.  delay from (i think bringing up network calls ntpdate):
> 
> Jun 26 12:20:32 vivweb ntpdate[611]: Can't adjust the time of day:
> Operation not permitted
> Jun 26 12:22:25 vivweb systemd[1]: ifup-wait-all-auto.service start
> operation timed out. Terminating.
> Jun 26 12:22:25 vivweb systemd[1]: Failed to start Wait for all "auto"
> /etc/network/interfaces to be up for network-online.target.
> Jun 26 12:22:25 vivweb systemd[1]: Unit ifup-wait-all-auto.service entered
> failed state.
> Jun 26 12:22:25 vivweb systemd[1]: ifup-wait-all-auto.service failed.
> 
> other errors:
> Jun 26 12:22:25 vivweb systemd[1]: Failed to reset devices.list on
> /lxc/vivweb/system.slice/networking.service: Permission denied
> Jun 26 12:22:26 vivweb systemd[1]: Failed to reset devices.list on
> /lxc/vivweb/system.slice/systemd-update-utmp-runlevel.service: Permission
> denied
> 
> any idea why these errors are occurring?


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC - Best way to avoid networking changes in a container

2015-06-29 Thread Serge Hallyn
Quoting Fajar A. Nugraha (l...@fajar.net):
> On Fri, Jun 26, 2015 at 8:20 PM, Benoit GEORGELIN - Association
> Web4all  wrote:
> > Hi Fajar,
> >
> > If the container have this setting
> >
> > lxc.network.type = veth
> > lxc.network.flags = up
> > lxc.network.hwaddr = 00:16:3e:2e:51:17
> > lxc.network.veth.pair = veth-cont1-0
> > lxc.network.ipv4 = 209.126.100.172/32
> > lxc.network.ipv4.gateway = 10.0.0.1
> >
> >
> > And the root user in the container change the file /etc/network/interfaces 
> > to something else than
> >
> > iface eth0 inet manual
> >
> > Will the container configuration still be the one used, or will the new IP
> > address configured in the container be talking to the network through
> > the veth?
> 
> 
> The container config lines above make lxc-start configure the necessary
> IP and routes. If the container has its own configuration, it will
> override the currently active ip/routes.
> 
> If the container's root user changes its configuration (e.g.
> /etc/network/interfaces) to use the SAME IP/routes (like in my
> previous link), it would obviously still work.
> 
> If the container's root user changes it to use another container's (e.g.
> container B's) IP address, then AFAIK the host will simply ignore it.
> At least that's what happens in my tests.

If you really want to have the container not change its networking, I suppose
you could either not grant it CAP_NET_ADMIN, or you could create a network
namespace for the container, set it up, and then run the container inside
that with 'lxc.network.type = none' in the container configuration.

Otherwise, using ebtables/iptables to lock the container's veth to its mac
and ip seems the best way.  It may be worth adding a new network_up hook
which is sent the names of the host-side nics, and run from the host
network namespace (obviously requiring root), to easily script setting these.
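
A sketch of the ebtables variant, using the veth name, MAC and IP from the
config quoted above (FORWARD only covers bridged traffic; treat this as a
starting point, not a complete lockdown):

ebtables -A FORWARD -i veth-cont1-0 -s 00:16:3e:2e:51:17 -p IPv4 --ip-src 209.126.100.172 -j ACCEPT
ebtables -A FORWARD -i veth-cont1-0 -s 00:16:3e:2e:51:17 -p ARP --arp-ip-src 209.126.100.172 -j ACCEPT
ebtables -A FORWARD -i veth-cont1-0 -j DROP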
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] this would seem to be good news - proxmox adopting LXC for container based vm's

2015-06-29 Thread brian mullan
I haven't used Proxmox (yet), but I was always interested in it because I've
seen so many posts over the past years about what a great environment it
is to use.

In the past I saw they utilized OpenVZ for any Proxmox container use.

They (Proxmox) just announced they are switching to LXC!

http://forum.proxmox.com/threads/22532-Proxmox-VE-4-0-beta1-released

Their link to "linux containers" on this page (see below) now points to
linuxcontainers.org.

Migrate container from OpenVZ to Linux container

NOTE: At the moment you must do it manually; later it will work with backup
and restore. Make a backup (use gzip) of the OpenVZ container.

Then copy the backup file into /var/lib/vz/template/cache/

Now it is possible to create a new CT, using the backup as a template (see
"Create_container").

References

Wikipedia: Linux Container
Linux Container
GIT: Linux Container
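
A rough sketch of the backup/copy step described in those notes (assumes the
Proxmox vzdump tool and an OpenVZ CT with ID 101):

vzdump 101 --compress gzip --dumpdir /var/lib/vz/dump
cp /var/lib/vz/dump/vzdump-openvz-101-* /var/lib/vz/template/cache/
# then create the new LXC CT using that archive as the template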
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXC in CentOS Linux release 7.1.1503

2015-06-29 Thread Vishnu Kanth
I have created a container (say, "base") and I am trying to create a clone
of it with overlayfs as the backing store. But the clone always fails
with the following error:

lxc_container: bdev.c: overlayfs_mount: 2237 No such device -
overlayfs: error mounting /var/lib/lxc/base/rootfs onto
/usr/lib64/lxc/rootfs options
upperdir=/var/lib/lxc/s0/delta0,lowerdir=/var/lib/lxc/base/rootfs,workdir=/var/lib/lxc/s0/olwork
clone failed


It's because the name of the overlay filesystem changed from "overlayfs" to
"overlay" at some point, and support for the new name was only added after
the release of lxc 1.0.7. Has anyone tried LXC on CentOS 7, and is there a
workaround for this issue?
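
As a first check, it may help to confirm which name the running kernel
actually exposes (this doesn't change what lxc 1.0.7 tries, per the above):

grep -i overlay /proc/filesystems
# if nothing shows up, try loading the module under either name
modprobe overlay || modprobe overlayfs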

Regards,
Vishnu
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Get the ip address of the host that hosts a container

2015-06-29 Thread Thouraya TH
Hi all

Please, is there a command to get the IP address of the host that hosts a
container?

I'd like to use this command inside the container.

Thanks a lot.
Best Regards.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD Openstack

2015-06-29 Thread gustavo panizzo (gfa)


putting the list back in the loop

On 2015-06-26 22:42, Frederico Araujo wrote:

Here is the output of n-sch:

2015-06-26 09:21:58.396 DEBUG oslo_concurrency.lockutils [req-a7c9846b-5df5-426d-8966-74b7839dc1a6 None None] Lock "host_instance" acquired by "sync_instance_info" :: waited 0.000s from (pid=22988) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:250
2015-06-26 09:21:58.397 INFO nova.scheduler.host_manager [req-a7c9846b-5df5-426d-8966-74b7839dc1a6 None None] Successfully synced instances from host 'devstack'.
2015-06-26 09:21:58.397 DEBUG oslo_concurrency.lockutils [req-a7c9846b-5df5-426d-8966-74b7839dc1a6 None None] Lock "host_instance" released by "sync_instance_info" :: held 0.001s from (pid=22988) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:262
2015-06-26 09:22:05.123 DEBUG nova.filters [req-4903b39f-2b83-4c05-ace6-74319784e85d admin admin] Starting with 1 host(s) from (pid=22988) get_filtered_objects /opt/stack/nova/nova/filters.py:70
2015-06-26 09:22:05.123 DEBUG nova.filters [req-4903b39f-2b83-4c05-ace6-74319784e85d admin admin] Filter RetryFilter returned 1 host(s) from (pid=22988) get_filtered_objects /opt/stack/nova/nova/filters.py:84
2015-06-26 09:22:05.123 DEBUG nova.filters [req-4903b39f-2b83-4c05-ace6-74319784e85d admin admin] Filter AvailabilityZoneFilter returned 1 host(s) from (pid=22988) get_filtered_objects /opt/stack/nova/nova/filters.py:84
2015-06-26 09:22:05.123 DEBUG nova.filters [req-4903b39f-2b83-4c05-ace6-74319784e85d admin admin] Filter RamFilter returned 1 host(s) from (pid=22988) get_filtered_objects /opt/stack/nova/nova/filters.py:84
2015-06-26 09:22:05.124 DEBUG nova.filters [req-4903b39f-2b83-4c05-ace6-74319784e85d admin admin] Filter ComputeFilter returned 1 host(s) from (pid=22988) get_filtered_objects /opt/stack/nova/nova/filters.py:84
2015-06-26 09:22:05.124 DEBUG nova.filters [req-4903b39f-2b83-4c05-ace6-74319784e85d admin admin] Filter ComputeCapabilitiesFilter returned 1 host(s) from (pid=22988) get_filtered_objects /opt/stack/nova/nova/filters.py:84
2015-06-26 09:22:05.124 DEBUG nova.filters [req-4903b39f-2b83-4c05-ace6-74319784e85d admin admin] Filter ImagePropertiesFilter returned 1 host(s) from (pid=22988) get_filtered_objects /opt/stack/nova/nova/filters.py:84
2015-06-26 09:22:05.124 DEBUG nova.filters [req-4903b39f-2b83-4c05-ace6-74319784e85d admin admin] Filter ServerGroupAntiAffinityFilter returned 1 host(s) from (pid=22988) get_filtered_objects /opt/stack/nova/nova/filters.py:84
2015-06-26 09:22:05.124 DEBUG nova.filters [req-4903b39f-2b83-4c05-ace6-74319784e85d admin admin] Filter ServerGroupAffinityFilter returned 1 host(s) from (pid=22988) get_filtered_objects /opt/stack/nova/nova/filters.py:84
2015-06-26 09:22:05.124 DEBUG nova.scheduler.filter_scheduler [req-4903b39f-2b83-4c05-ace6-74319784e85d admin admin] Filtered [(devstack, devstack) ram:15588 disk:83968 io_ops:0 instances:0] from (pid=22988) _schedule /opt/stack/nova/nova/scheduler/filter_scheduler.py:152
2015-06-26 09:22:05.125 DEBUG nova.scheduler.filter_scheduler [req-4903b39f-2b83-4c05-ace6-74319784e85d admin admin] Weighed [WeighedHost [host: (devstack, devstack) ram:15588 disk:83968 io_ops:0 instances:0, weight: 0.0]] from (pid=22988) _schedule /opt/stack/nova/nova/scheduler/filter_scheduler.py:157
2015-06-26 09:22:05.125 DEBUG nova.scheduler.filter_scheduler [req-4903b39f-2b83-4c05-ace6-74319784e85d admin admin] Selected host: WeighedHost [host: (devstack, devstack) ram:15588 disk:83968 io_ops:0 instances:0, weight: 0.0] from (pid=22988) _schedule /opt/stack/nova/nova/scheduler/filter_scheduler.py:167
2015-06-26 09:22:08.803 DEBUG nova.openstack.common.periodic_task [req-fb9fc0e8-daef-4e32-843d-a2d0a93b0a21 None None] Running periodic task SchedulerManager._expire_reservations from (pid=22988) run_periodic_tasks /opt/stack/nova/nova/openstack/common/periodic_task.py:219
2015-06-26 09:22:08.807 DEBUG nova.openstack.common.loopingcall [req-fb9fc0e8-daef-4e32-843d-a2d0a93b0a21 None None] Dynamic looping call > sleeping for 5.21 seconds from (pid=22988) _inner /opt/stack/nova/nova/openstack/common/loopingcall.py:132
2015-06-26 09:22:14.019 DEBUG nova.openstack.common.periodic_task [req-fb9fc0e8-daef-4e32-843d-a2d0a93b0a21 None None] Running periodic task SchedulerManager._run_periodic_tasks from (pid=22988) run_periodic_tasks /opt/stack/nova/nova/openstack/common/periodic_task.py:219
2015-06-26 09:22:14.019 DEBUG nova.openstack.common.loopingcall [req-fb9fc0e8-daef-4e32-843d-a2d0a93b0a21 None None] Dynamic looping call > sleeping for 55.78 seconds from (pid=22988) _inner /opt/stack/nova/nova/openstack/common/loopingcall.py:132
2015-06-26 09:23:09.803 DEBUG nova.openstack.common.periodic_task [req-fb9fc0e8-daef-4e32-843d-a2d0a93b0a21 None None] Running periodic task SchedulerManager._expire_reservations from (pid=22988) run_periodic_tasks /opt/stack/nova/nova/openstack/common/periodic_task.py:219
2015-06-26 0