Re: [lxc-users] What are the options for connecting to storage?

2016-11-23 Thread McDonagh, Ed
So I gave up on NFS, as I couldn’t get the permissions right such that Windows Active Directory users using the same Synology share via CIFS could manipulate the objects created in the container.

With a Samba mount, I could set the username in the mount command and all is 
well!

The mount is passed to the unprivileged container with
lxc config device add c1 sdb disk source=/mnt/nasshare path=/mnt/nasshare
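
For completeness, the host-side mount looks something like this (the share path, username and uid/gid here are illustrative, not my exact values):

mount -t cifs //synology/nasshare /mnt/nasshare -o username=aduser,uid=1000,gid=1000

Setting the owner on the mount itself is what lets the same files line up for the Windows CIFS users and for the container.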

I have set up the same mount on the other server, with the same credentials and 
path; now I can move the container from one host to the other, and the share is 
available without any further effort. I haven’t tried live migration, as it 
doesn’t work on my servers.

Ed

From: lxc-users [mailto:lxc-users-boun...@lists.linuxcontainers.org] On Behalf 
Of Ron Kelley
Sent: 17 November 2016 15:46
To: LXC users mailing-list
Subject: Re: [lxc-users] What are the options for connecting to storage?

Hi Ed,

Not sure how well that will work unless your unprivileged containers use the same UID/GID mapping on both of your LXD servers.  Looking forward to your test results :-)
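
A quick sanity check, assuming stock Ubuntu packaging, is to compare the allocated id ranges on each host:

grep -E 'root|lxd' /etc/subuid /etc/subgid

If the base offsets differ between hosts, files the container writes on the share under one host will show up under different owners after a move.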


-Ron


On Nov 17, 2016, at 10:11 AM, McDonagh, Ed 
<ed.mcdon...@rmh.nhs.uk> wrote:

Thanks Ron. Thinking my requirements through again, I’ve decided to scrap the 
iSCSI mount and use NFS/CIFS instead. So following your pointer I have NFS 
mounted in the host, and then bind-mounted that to the container. I’ve yet to 
see how that migrates, but one bridge at a time!
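
Concretely, something like this (server name and paths illustrative, not my exact setup):

mount -t nfs synology:/volume1/images /mnt/nasshare
lxc config device add c1 images disk source=/mnt/nasshare path=/mnt/images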

Thanks again.

Kind regards

Ed


From: lxc-users [mailto:lxc-users-boun...@lists.linuxcontainers.org] On Behalf 
Of Ron Kelley
Sent: 16 November 2016 19:49
To: LXC users mailing-list
Subject: Re: [lxc-users] What are the options for connecting to storage?

What about using a bind-mount?  Mount your iscsi volume to the LXD server and 
bind-mount it into the container.  A quick URL:  
https://github.com/lxc/lxd/issues/2005
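
Roughly like this, with the target IQN, portal IP, device and container names as placeholders:

iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2000-01.com.synology:nas.target-1 -p 192.168.1.10 --login
mount /dev/sdX /mnt/iscsi
lxc config device add mycontainer data disk source=/mnt/iscsi path=/mnt/data

That keeps the iSCSI initiator on the host, so the container itself never needs the privileges iSCSI would otherwise require.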






On Nov 16, 2016, at 11:42 AM, McDonagh, Ed 
<ed.mcdon...@rmh.nhs.uk> wrote:

Hi

I need to create a container that has access to a couple of TB to store image 
files (mostly 8MB upwards). My instinct is to create an iSCSI target on my 
Synology server and connect from the container to get a new disk that I can use 
for the storage.

I understand that the guest has to be privileged to do any sort of connection to storage; however, it seems that the iSCSI node in the container doesn’t work.

What are my options? Do I need to use an SMB connection to the Synology server? Or is NFS better? Is there a way to connect using iSCSI?

If it makes any difference, I have two identical servers that I am running LXD on, and want to be able to move the containers between them for maintenance etc. Ideally live, but not essential. And that doesn’t seem to work anyway with LXD 2.0, so it's kind of a moot point!

Any help, suggestions or advice would be very welcome.

Kind regards

Ed

Ed McDonagh
Head of Scientific Computing (Diagnostic Radiology)
Joint Department of Physics
The Royal Marsden NHS Foundation Trust
Tel 020 7808 2512
Fax 020 7808 2522





Re: [lxc-users] What are the options for connecting to storage?

2016-11-17 Thread McDonagh, Ed
Thanks Ron. Thinking my requirements through again, I’ve decided to scrap the 
iSCSI mount and use NFS/CIFS instead. So following your pointer I have NFS 
mounted in the host, and then bind-mounted that to the container. I’ve yet to 
see how that migrates, but one bridge at a time!

Thanks again.

Kind regards

Ed


From: lxc-users [mailto:lxc-users-boun...@lists.linuxcontainers.org] On Behalf 
Of Ron Kelley
Sent: 16 November 2016 19:49
To: LXC users mailing-list
Subject: Re: [lxc-users] What are the options for connecting to storage?

What about using a bind-mount?  Mount your iscsi volume to the LXD server and 
bind-mount it into the container.  A quick URL:  
https://github.com/lxc/lxd/issues/2005






On Nov 16, 2016, at 11:42 AM, McDonagh, Ed 
<ed.mcdon...@rmh.nhs.uk> wrote:

Hi

I need to create a container that has access to a couple of TB to store image 
files (mostly 8MB upwards). My instinct is to create an iSCSI target on my 
Synology server and connect from the container to get a new disk that I can use 
for the storage.

I understand that the guest has to be privileged to do any sort of connection to storage; however, it seems that the iSCSI node in the container doesn’t work.

What are my options? Do I need to use an SMB connection to the Synology server? Or is NFS better? Is there a way to connect using iSCSI?

If it makes any difference, I have two identical servers that I am running LXD on, and want to be able to move the containers between them for maintenance etc. Ideally live, but not essential. And that doesn’t seem to work anyway with LXD 2.0, so it's kind of a moot point!

Any help, suggestions or advice would be very welcome.

Kind regards

Ed

Ed McDonagh
Head of Scientific Computing (Diagnostic Radiology)
Joint Department of Physics
The Royal Marsden NHS Foundation Trust
Tel 020 7808 2512
Fax 020 7808 2522





[lxc-users] What are the options for connecting to storage?

2016-11-16 Thread McDonagh, Ed
Hi

I need to create a container that has access to a couple of TB to store image 
files (mostly 8MB upwards). My instinct is to create an iSCSI target on my 
Synology server and connect from the container to get a new disk that I can use 
for the storage.

I understand that the guest has to be privileged to do any sort of connection to storage; however, it seems that the iSCSI node in the container doesn't work.

What are my options? Do I need to use an SMB connection to the Synology server? Or is NFS better? Is there a way to connect using iSCSI?

If it makes any difference, I have two identical servers that I am running LXD on, and want to be able to move the containers between them for maintenance etc. Ideally live, but not essential. And that doesn't seem to work anyway with LXD 2.0, so it's kind of a moot point!

Any help, suggestions or advice would be very welcome.

Kind regards

Ed

Ed McDonagh
Head of Scientific Computing (Diagnostic Radiology)
Joint Department of Physics
The Royal Marsden NHS Foundation Trust
Tel 020 7808 2512
Fax 020 7808 2522



[lxc-users] Live migration from UEFI to BIOS host

2016-10-24 Thread McDonagh, Ed
Dear experts

With LXD 2.0.5, I can now take stateful snapshots, which I guess confirms the closing of https://bugs.launchpad.net/ubuntu/+source/criu/+bug/1626100, though I haven't tested it thoroughly.
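
For anyone wanting to try the same, a stateful snapshot is just (container name illustrative):

lxc snapshot c1 snap0 --stateful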

I now have the following error when trying to do a live migration from h2 (a UEFI-booted host, I now discover) to h1 (a BIOS-booted host):

error: migration restore failed
(00.008782) Warn  (cr-restore.c:1159): Set CLONE_PARENT | CLONE_NEWPID but it 
might cause restore problem,because not all kernels support such clone flags 
combinations!
(00.298284)  1: Error (mount.c:2406): mnt: Can't mount at 
./sys/firmware/efi/efivars: No such file or directory
(00.298292)  1: Error (mount.c:2555): mnt: Unable to statfs 
./sys/firmware/efi/efivars: No such file or directory
(00.314227) Error (cr-restore.c:1352): 5857 killed by signal 9
(00.358573) Error (cr-restore.c:2182): Restoring FAILED.

I am assuming the issue is that h1 does not have a /sys/firmware/efi folder.
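
For anyone hitting the same thing, an easy way to confirm how each host booted:

[ -d /sys/firmware/efi ] && echo UEFI || echo BIOS

Presumably h2 reports UEFI and h1 BIOS, which matches the efivars mount that CRIU fails to restore.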

Is it not possible to migrate in these circumstances?

Thanks in advance.

Ed


Re: [lxc-users] LXC containers w/ static IPs work on some hosts, not on others

2016-10-20 Thread McDonagh, Ed
Yes, the naming convention they have chosen makes looking for LXD information very difficult when the commands, file paths and conventions are different from those of the LXC predecessor/underlying programs.

But I think that has been discussed on this list before!

From: lxc-users [mailto:lxc-users-boun...@lists.linuxcontainers.org] On Behalf 
Of Michael Peek
Sent: 20 October 2016 19:07
To: LXC users mailing-list
Subject: Re: [lxc-users] LXC containers w/ static IPs work on some hosts, not 
on others

I started out with lxd but ran into problems when I tried googling for instructions on how to assign a static IP address to a container.  All of the instructions I found said to edit /var/lib/lxc/<container name>/config, and there was no /var/lib/lxc/ directory.  So I uninstalled lxd and installed lxc instead.

Michael



On 10/20/2016 01:59 PM, McDonagh, Ed wrote:
I’m very much not a guru on this topic, but it strikes me to ask whether you are using lxc for a reason, rather than using lxd?

If you don’t have a legacy reason, I don’t know why you wouldn’t want to use the new tools that are being developed.

From: lxc-users [mailto:lxc-users-boun...@lists.linuxcontainers.org] On Behalf 
Of Michael Peek
Sent: 20 October 2016 18:57
To: LXC users mailing-list
Subject: Re: [lxc-users] LXC containers w/ static IPs work on some hosts, not 
on others

There is no lxc-profile command, and lxc-ls and lxc-info don't list anything 
about a profile.  I have no idea if that means anything.

Michael



On 10/20/2016 01:53 PM, McDonagh, Ed wrote:
It will be the new lxc software, but without the niceness of the lxd interface.

Hence all his commands are of the form lxc-command rather than lxc command.

From: lxc-users [mailto:lxc-users-boun...@lists.linuxcontainers.org] On Behalf 
Of Ron Kelley
Sent: 20 October 2016 18:51
To: LXC users mailing-list
Subject: Re: [lxc-users] LXC containers w/ static IPs work on some hosts, not 
on others

hmmm, seems you are running the “original” version of lxc and not the new 
lxc/lxd software.  Please ignore my comments then…


On Oct 20, 2016, at 1:47 PM, Michael Peek 
<p...@nimbios.org> wrote:

# lxc profile show
The program 'lxc' is currently not installed. You can install it by typing:
apt install lxd-client

Maybe that's part of the problem?  Am I missing a package?  Here are the 
packages I have installed for lxc:

# dpkg -l | grep lxc | cut -c1-20
ii  liblxc1
ii  lxc
ii  lxc-common
ii  lxc-templates
ii  lxc1
ii  lxcfs
ii  python3-lxc

Michael

On 10/20/2016 01:43 PM, Ron Kelley wrote:
"lxc profile show”.  Usually, you have a default profile that gets applied to 
your container unless you have created a new/custom profile.





On Oct 20, 2016, at 1:41 PM, Michael Peek 
<p...@nimbios.org> wrote:

How do I tell?

Michael



On 10/20/2016 01:35 PM, Ron Kelley wrote:
What profile(s) are you using for your LXC containers?



On Oct 20, 2016, at 1:33 PM, Michael Peek 
<p...@nimbios.org> wrote:

Hi gurus,

I'm scratching my head again.  I'm using the following commands to create an 
LXC container with a static IP address:
# lxc-create -n my-container-1 -t download -- -d ubuntu -r xenial -a amd64

# vi /var/lib/lxc/my-container-1/config

Change:
# Network configuration
# lxc.network.type = veth
# lxc.network.link = lxcbr0
# lxc.network.flags = up
# lxc.network.hwaddr = 00:16:3e:0d:ec:13
lxc.network.type = macvlan
lxc.network.link = eno1

# vi /var/lib/lxc/my-container-1/rootfs/etc/network/interfaces

Change:
#iface eth0 inet dhcp
iface eth0 inet static
  address xxx.xxx.xxx.4
  netmask 255.255.255.0
  network xxx.xxx.xxx.0
  broadcast xxx.xxx.xxx.255
  gateway xxx.xxx.xxx.1
  dns-nameservers xxx.xxx.0.66 xxx.xxx.128.66 8.8.8.8
  dns-search my.domain

# lxc-start -n my-container-1 -d

It failed to work.  I reviewed my notes from past posts to the list but found 
no discrepancies.  So I deleted the container and tried it on another host -- 
and it worked.  Next I deleted that container and went back to the first host, 
and it failed.  Lastly, I tried the above steps on multiple hosts and found 
that it works fine on some hosts, but not on others, and I have no idea why.  
On hosts where this fails there are no error messages, but the container can't 
access the network, and nothing on the network can access the container.
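
Two host-side checks that might narrow it down, using the parent interface from the config above:

ip -d link show eno1
tcpdump -ni eno1 arp

The first confirms the parent NIC is up; the second shows whether the container's ARP requests ever reach the wire on the failing hosts.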

Is there some step that I'm missing?

Thanks for any help,

Michael Peek

Re: [lxc-users] LXC containers w/ static IPs work on some hosts, not on others

2016-10-20 Thread McDonagh, Ed
I’m very much not a guru on this topic, but it strikes me to ask whether you are using lxc for a reason, rather than using lxd?

If you don’t have a legacy reason, I don’t know why you wouldn’t want to use the new tools that are being developed.

From: lxc-users [mailto:lxc-users-boun...@lists.linuxcontainers.org] On Behalf 
Of Michael Peek
Sent: 20 October 2016 18:57
To: LXC users mailing-list
Subject: Re: [lxc-users] LXC containers w/ static IPs work on some hosts, not 
on others

There is no lxc-profile command, and lxc-ls and lxc-info don't list anything 
about a profile.  I have no idea if that means anything.

Michael



On 10/20/2016 01:53 PM, McDonagh, Ed wrote:
It will be the new lxc software, but without the niceness of the lxd interface.

Hence all his commands are of the form lxc-command rather than lxc command.

From: lxc-users [mailto:lxc-users-boun...@lists.linuxcontainers.org] On Behalf 
Of Ron Kelley
Sent: 20 October 2016 18:51
To: LXC users mailing-list
Subject: Re: [lxc-users] LXC containers w/ static IPs work on some hosts, not 
on others

hmmm, seems you are running the “original” version of lxc and not the new 
lxc/lxd software.  Please ignore my comments then…


On Oct 20, 2016, at 1:47 PM, Michael Peek 
<p...@nimbios.org> wrote:

# lxc profile show
The program 'lxc' is currently not installed. You can install it by typing:
apt install lxd-client

Maybe that's part of the problem?  Am I missing a package?  Here are the 
packages I have installed for lxc:

# dpkg -l | grep lxc | cut -c1-20
ii  liblxc1
ii  lxc
ii  lxc-common
ii  lxc-templates
ii  lxc1
ii  lxcfs
ii  python3-lxc

Michael

On 10/20/2016 01:43 PM, Ron Kelley wrote:
"lxc profile show”.  Usually, you have a default profile that gets applied to 
your container unless you have created a new/custom profile.





On Oct 20, 2016, at 1:41 PM, Michael Peek 
<p...@nimbios.org> wrote:

How do I tell?

Michael



On 10/20/2016 01:35 PM, Ron Kelley wrote:
What profile(s) are you using for your LXC containers?



On Oct 20, 2016, at 1:33 PM, Michael Peek 
<p...@nimbios.org> wrote:

Hi gurus,

I'm scratching my head again.  I'm using the following commands to create an 
LXC container with a static IP address:
# lxc-create -n my-container-1 -t download -- -d ubuntu -r xenial -a amd64

# vi /var/lib/lxc/my-container-1/config

Change:
# Network configuration
# lxc.network.type = veth
# lxc.network.link = lxcbr0
# lxc.network.flags = up
# lxc.network.hwaddr = 00:16:3e:0d:ec:13
lxc.network.type = macvlan
lxc.network.link = eno1

# vi /var/lib/lxc/my-container-1/rootfs/etc/network/interfaces

Change:
#iface eth0 inet dhcp
iface eth0 inet static
  address xxx.xxx.xxx.4
  netmask 255.255.255.0
  network xxx.xxx.xxx.0
  broadcast xxx.xxx.xxx.255
  gateway xxx.xxx.xxx.1
  dns-nameservers xxx.xxx.0.66 xxx.xxx.128.66 8.8.8.8
  dns-search my.domain

# lxc-start -n my-container-1 -d

It failed to work.  I reviewed my notes from past posts to the list but found 
no discrepancies.  So I deleted the container and tried it on another host -- 
and it worked.  Next I deleted that container and went back to the first host, 
and it failed.  Lastly, I tried the above steps on multiple hosts and found 
that it works fine on some hosts, but not on others, and I have no idea why.  
On hosts where this fails there are no error messages, but the container can't 
access the network, and nothing on the network can access the container.

Is there some step that I'm missing?

Thanks for any help,

Michael Peek

[lxc-users] LXD 2.0.5 release - when will it hit Xenial-updates?

2016-10-17 Thread McDonagh, Ed
Dear list/Stéphane

LXD 2.0.5 was released nearly two weeks ago - when is it due to hit 
xenial-updates? I'm assuming that the yakkety release has caused the delay?

I have a bug that was closed on the premise that the fix was in 2.0.5, and so I'm keen to install it, but I don't want to go to xenial-proposed.
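
In the meantime, a plain apt query shows which version each pocket currently carries:

apt-cache policy lxd

(rmadison lxd, from the devscripts package, gives the same view across all pockets at once.)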

Any idea of when it might come?

Kind regards

Ed

P.S. I held off till significantly more than the first week, based on the 
previous two releases!


Re: [lxc-users] LXC 2.0.4, LXCFS 2.0.3 and LXD 2.0.4 have been released!

2016-08-19 Thread McDonagh, Ed
Quoting Stéphane’s response to the same question I asked for the last release…


‘It takes a week for something to go from proposed to updates, that's to allow 
getting early feedback on any kind of regression.’

From: lxc-users [mailto:lxc-users-boun...@lists.linuxcontainers.org] On Behalf 
Of Steven Spencer
Sent: 19 August 2016 14:39
To: LXC users mailing-list
Subject: Re: [lxc-users] LXC 2.0.4, LXCFS 2.0.3 and LXD 2.0.4 have been 
released!

Stéphane,

I was wondering why the Xenial upstream doesn't appear to have 2.0.4 yet. I have a couple of LXD instances running Ubuntu 16.04, and a sudo apt-get update && sudo apt-get upgrade returns no new update for LXD.

Thanks,
Steve

On Mon, Aug 15, 2016 at 10:39 PM, Stéphane Graber wrote:
Hello everyone,

Today the LXC project is pleased to announce the release of:
 - LXC 2.0.4
 - LXD 2.0.4
 - LXCFS 2.0.3

They each contain the accumulated bugfixes since the previous round of
bugfix releases a bit over a month ago.

The detailed changelogs can be found at:
 - https://linuxcontainers.org/lxc/news/
 - https://linuxcontainers.org/lxcfs/news/
 - https://linuxcontainers.org/lxd/news/

As a reminder, the 2.0 series of all of those is supported for bugfix
and security updates up until June 2021.

Thanks to everyone who contributed to those projects and helped make
this possible!


Stéphane Graber
On behalf of the LXC, LXCFS and LXD development teams


Re: [lxc-users] LXC 2.0.2 & 2.0.3, LXCFS 2.0.2 and LXD 2.0.3 have been released!

2016-07-04 Thread McDonagh, Ed
Hello

Is there a timeline for LXC/LXD 2.0.3 to be released to xenial-updates? I saw 
it was released to trusty-backports and yakkety on Thursday or Friday last 
week, but nothing for xenial.

Or do I need to add the PPA to keep up to date?
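
If the wait becomes a problem, my understanding is that the PPA route is the usual one (assuming ubuntu-lxc/lxd-stable is still the right PPA):

sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable
sudo apt-get update && sudo apt-get install lxd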

Kind regards

Ed

-Original Message-
From: lxc-users [mailto:lxc-users-boun...@lists.linuxcontainers.org] On Behalf 
Of Stéphane Graber
Sent: 29 June 2016 00:27
To: lxc-de...@lists.linuxcontainers.org; lxc-users@lists.linuxcontainers.org; 
contain...@lists.linux-foundation.org
Subject: [lxc-users] LXC 2.0.2 & 2.0.3, LXCFS 2.0.2 and LXD 2.0.3 have been 
released!

Hello everyone,

Today the LXC project is pleased to announce the release of:
 - LXC 2.0.2 & 2.0.3
 - LXD 2.0.3
 - LXCFS 2.0.2

We had to release two LXC bugfix releases due to a problem in the apparmor 
profile which was included in 2.0.2. Fixing the apparmor profile is the only 
change in 2.0.3.


They each contain the accumulated bugfixes since the previous round of bugfix 
releases a bit over a month ago.

The detailed changelogs can be found at:
 - https://linuxcontainers.org/lxc/news/
 - https://linuxcontainers.org/lxcfs/news/
 - https://linuxcontainers.org/lxd/news/

As a reminder, the 2.0 series of all of those is supported for bugfix and 
security updates up until June 2021.

Thanks to everyone who contributed to those projects and helped make this 
possible!


Stéphane Graber
On behalf of the LXC, LXCFS and LXD development teams

[lxc-users] Failure to start container stopped in stateful mode

2016-06-24 Thread McDonagh, Ed
Hello again

I stopped one of my containers using the flag --stateful, and it now will not 
restart, having the log reproduced below.

This container was running some django services - I have since successfully 
stopped and started basic containers with no services running with the 
--stateful flag.

Is there an obvious reason for the failure, and is it likely that I will be able to resurrect the container?

The following is the end of the log 'snapshot_restore_2016log'


(00.127599)  1: mnt: 285:./sys/kernel/debug private 0 shared 0 slave 1
(00.127617)  1: mnt:Mounting tracefs @./sys/kernel/debug/tracing (0)
(00.127630)  1: mnt:Bind /sys/kernel/debug/tracing/ to 
./sys/kernel/debug/tracing
(00.127657)  1: Error (mount.c:2406): mnt: Can't mount at 
./sys/kernel/debug/tracing: Permission denied
(00.145927) Error (cr-restore.c:1352): 20283 killed by signal 9
(00.185918) Switching to new ns to clean ghosts
(00.210243) uns: calling exit_usernsd (-1, 1)
(00.210345) uns: daemon calls 0x4523c0 (20278, -1, 1)
(00.210389) uns: `- daemon exits w/ 0
(00.210799) uns: daemon stopped
(00.210842) Error (cr-restore.c:2182): Restoring FAILED.


Kind regards
Ed

---
lxc info --show-log frp-dose02:

Name: frp-dose02
Architecture: x86_64
Status: Stopped
Type: persistent
Profiles: bridged

Log:

lxc 20160624153914.283 INFO lxc_lsm - lsm/lsm.c:lsm_init:48 - 
LSM security driver AppArmor
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .reject_force_umount  # comment 
this to allow umount -f;  not recommended.
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for reject_force_umount 
action 0
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:do_resolve_add_rule:216 - Setting seccomp rule to reject force umounts

lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for reject_force_umount 
action 0
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:do_resolve_add_rule:216 - Setting seccomp rule to reject force umounts

lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .[all].
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .kexec_load errno 1.
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for kexec_load action 327681
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for kexec_load action 327681
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .open_by_handle_at errno 1.
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for open_by_handle_at action 
327681
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for open_by_handle_at action 
327681
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .init_module errno 1.
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for init_module action 327681
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for init_module action 327681
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .finit_module errno 1.
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for finit_module action 
327681
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for finit_module action 
327681
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .delete_module errno 1.
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for delete_module action 
327681
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for delete_module action 
327681
lxc 20160624153914.284 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:456 - Merging in the compat seccomp ctx into the main 
one
lxc 20160624153914.284 INFO lxc_conf - 
conf.c:run_script_argv:367 - Executing script '/usr/bin/lxd callhook 
/var/lib/lxd 10 start' for container 'frp-dose02', config section 'lxc'
lxc 20160624153914.315 DEBUGlxc_start - 
start.c:setup_signal_fd:289 - sigchild handler set
lxc 20160624153914.315 DEBUGlxc_console - 
console.c:lxc_console_peer_default:469 - no console peer
lxc 20160624153914.315 INFO lxc_start - 

Re: [lxc-users] Live migration mkdtemp failure

2016-06-23 Thread McDonagh, Ed
Thanks Jake

Is it fixed in the standard repositories, or in the PPA?

From: jjs - mainphrame [mailto:j...@mainphrame.com]
Sent: 21 June 2016 17:27
To: LXC users mailing-list
Subject: Re: [lxc-users] Live migration mkdtemp failure

That particular error was resolved, but the lxc live migration doesn't work for 
a different reason now. We now get an error that says "can't dump ghost file" 
because of apparent size limitations - a limit less than the size of any lxc 
container we have running here.

(In contrast, live migration on all of our Openvz 7 containers works reliably)
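
CRIU itself has a knob for that limit, e.g.:

criu dump --ghost-limit 64M ...

but I don't know whether LXD exposes a way to pass it through.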

Jake




On Tue, Jun 21, 2016 at 4:19 AM, McDonagh, Ed 
<ed.mcdon...@rmh.nhs.uk> wrote:


> On Tue, Mar 29, 2016 at 09:30:19AM -0700, jjs - mainphrame wrote:
> > On Tue, Mar 29, 2016 at 7:18 AM, Tycho Andersen <
> > tycho.andersen at canonical.com> wrote:
> >
> > > On Mon, Mar 28, 2016 at 08:47:24PM -0700, jjs - mainphrame wrote:
> > > > I've looked at ct migration between 2 ubuntu 16.04 hosts today, and had
> > > > some interesting problems;  I find that migration of stopped containers
> > > > works fairly reliably; but live migration, well, it transfers a lot of
> > > > data, then exits with a failure message. I can then move the same
> > > > container, stopped, with no problem.
> > > >
> > > > The error is the same every time, a failure of "mkdtemp" -
> > >
> > > It looks like your host /tmp isn't writable by the uid map that the
> > > container is being restored as?
> > >
> >
> > Which is odd, since /tmp has 1777 perms on both hosts, so I don't see how
> > it could be a permissions problem. Surely the default apparmor profile is
> > not the cause? You did give me a new idea though, and I'll set up a test
> > with privileged containers for comparison. Is there a switch to enable
> > verbose logging?
>
> It already is enabled, you can find the full logs in
> /var/log/lxd/$container/migration_*
>
> Perhaps the pwd of the CRIU task is what's broken instead, since CRIU
> isn't supplying a full mkdtemp template. I'll have a deeper look in a
> bit.
>
> Tycho
>
> >
> > > >
> > > > root at ronnie:~# lxc move third lxd:
> > > > error: Error transferring container data: restore failed:
> > > > (00.033172)  1: Error (cr-restore.c:1489): mkdtemp failed
> > > > crtools-proc.x9p5OH: Permission denied
> > > > (00.060072) Error (cr-restore.c:1352): 9188 killed by signal 9
> > > > (00.117126) Error (cr-restore.c:2182): Restoring FAILED.

I've been getting the same error - was the issue ever resolved for 
non-privileged containers?

Kind regards
Ed

[lxc-users] Live migration mkdtemp failure

2016-06-21 Thread McDonagh, Ed


> On Tue, Mar 29, 2016 at 09:30:19AM -0700, jjs - mainphrame wrote:
> > On Tue, Mar 29, 2016 at 7:18 AM, Tycho Andersen <
> > tycho.andersen at canonical.com> wrote:
> >
> > > On Mon, Mar 28, 2016 at 08:47:24PM -0700, jjs - mainphrame wrote:
> > > > I've looked at ct migration between 2 ubuntu 16.04 hosts today, and had
> > > > some interesting problems;  I find that migration of stopped containers
> > > > works fairly reliably; but live migration, well, it transfers a lot of
> > > > data, then exits with a failure message. I can then move the same
> > > > container, stopped, with no problem.
> > > >
> > > > The error is the same every time, a failure of "mkdtemp" -
> > >
> > > It looks like your host /tmp isn't writable by the uid map that the
> > > container is being restored as?
> > >
> >
> > Which is odd, since /tmp has 1777 perms on both hosts, so I don't see how
> > it could be a permissions problem. Surely the default apparmor profile is
> > not the cause? You did give me a new idea though, and I'll set up a test
> > with privileged containers for comparison. Is there a switch to enable
> > verbose logging?
>
> It already is enabled, you can find the full logs in
> /var/log/lxd/$container/migration_*
>
> Perhaps the pwd of the CRIU task is what's broken instead, since CRIU
> isn't supplying a full mkdtemp template. I'll have a deeper look in a
> bit.
>
> Tycho
>
> > 
> > > >
> > > > root at ronnie:~# lxc move third lxd:
> > > > error: Error transferring container data: restore failed:
> > > > (00.033172)  1: Error (cr-restore.c:1489): mkdtemp failed
> > > > crtools-proc.x9p5OH: Permission denied
> > > > (00.060072) Error (cr-restore.c:1352): 9188 killed by signal 9
> > > > (00.117126) Error (cr-restore.c:2182): Restoring FAILED.

I've been getting the same error - was the issue ever resolved for 
non-privileged containers?

Kind regards
Ed