Re: [lxc-users] LXD - Production Hardware Guide

2020-06-07 Thread Steven Spencer
Thanks Stéphane!

On Fri, Jun 5, 2020 at 9:42 PM Stéphane Graber 
wrote:

> ZFS very much prefers seeing the individual physical disks.
> When dealing directly with full disks, it can do quite a bit to
> monitor them and handle issues.
> If it's placed behind RAID, whether hardware or software backed, all
> that information disappears and it doesn't really know whether to
> retry an operation or consider the disk to be bad.
>
> The same is true if you ever end up using Ceph where you want a 1:1
> mapping between physical disk and OSDs, so in general I'd recommend
> against hardware RAID at this point.
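>
> For illustration, a minimal sketch of that setup (disk names here are
> hypothetical), letting ZFS see the raw disks and then pointing LXD at
> the resulting pool:
>
>   zpool create tank mirror /dev/sdb /dev/sdc
>   lxc storage create default zfs source=tank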
>
> Stéphane
>
> On Fri, Jun 5, 2020 at 4:59 PM Steven Spencer 
> wrote:
> >
> > Andrey and List,
> >
> > Thanks so much for your response, and we understand all of that. We know
> > that if we have 3 containers the server configuration is going to be
> > different than if we have 30 or 100, and that we will have to size RAM,
> > processors, etc., accordingly. What I think we are more interested in is: If
> > we use ZFS, is there a recommended way to use it? Should we use RAID of any
> > kind? If so, should it be hardware or software RAID? We realize, too, that
> > we will need to size our drives according to how much space we will
> > actually need for the number of containers we will be running. Really it's
> > just about the underlying file system for the containers. It seems like
> > there should be a basic white paper or something with just guidelines on
> > best practices for LXD. That would really help us. We have found the LXD
> > documentation and we have actually used these docs. We've even used ZFS
> > under LXD on our first iteration of this project about 3 years ago. We are
> > now looking to do this again. The first time was mostly a success. Recently,
> > we had the main LXD server die for no apparent reason (hardware /
> > software / memory - the logs don't really give us much). Our snapshot
> > server was the savior, but now we need to repeat our earlier process, and
> > if we made mistakes, we would like to fix them in the process.
> >
> > Thanks again for the response, any further information would be helpful.
> >
> > Steven G. Spencer
> >
> >
> >
> > On Fri, Jun 5, 2020 at 2:20 PM Andrey Repin  wrote:
> >>
> >> Greetings, Steven Spencer!
> >>
> >> > Is there a good link to use for specifying hardware requirements for
> >> > an LXD dedicated server?
> >>
> >> There can't be specific requirements, it all depends on what you want
> >> to do,
> >> how many containers to run, etc.
> >>
> >>
> >> --
> >> With best regards,
> >> Andrey Repin
> >> Friday, June 5, 2020 22:05:54
> >>
> >> Sorry for my terrible english...
> >>
>
>
>
> --
> Stéphane


Re: [lxc-users] LXD - Production Hardware Guide

2020-06-05 Thread Steven Spencer
Andrey and List,

Thanks so much for your response, and we understand all of that. We know
that if we have 3 containers the server configuration is going to be
different than if we have 30 or 100, and that we will have to size RAM,
processors, etc., accordingly. What I think we are more interested in is: If
we use ZFS, is there a recommended way to use it? Should we use RAID of any
kind? If so, should it be hardware or software RAID? We realize, too, that
we will need to size our drives according to how much space we will
actually need for the number of containers we will be running. Really it's
just about the underlying file system for the containers. It seems like
there should be a basic white paper or something with just guidelines on
best practices for LXD. That would really help us. We have found the LXD
documentation and we have actually used these docs. We've even used ZFS
under LXD on our first iteration of this project about 3 years ago. We are
now looking to do this again. The first time was mostly a success. Recently,
we had the main LXD server die for no apparent reason (hardware /
software / memory - the logs don't really give us much). Our snapshot
server was the savior, but now we need to repeat our earlier process, and
if we made mistakes, we would like to fix them in the process.

Thanks again for the response, any further information would be helpful.

Steven G. Spencer



On Fri, Jun 5, 2020 at 2:20 PM Andrey Repin  wrote:

> Greetings, Steven Spencer!
>
> > Is there a good link to use for specifying hardware requirements for an
> > LXD dedicated server?
>
> There can't be specific requirements, it all depends on what you want to
> do, how many containers to run, etc.
>
>
> --
> With best regards,
> Andrey Repin
> Friday, June 5, 2020 22:05:54
>
> Sorry for my terrible english...
>


[lxc-users] LXD - Production Hardware Guide

2020-06-05 Thread Steven Spencer
Good Morning,

Is there a good link to use for specifying hardware requirements for an LXD
dedicated server?

Thanks,
Steven G. Spencer


[lxc-users] Network and snapshots copied to another server

2019-04-04 Thread Steven Spencer
All,

We have a native LXD server (3.0.0), and I was curious about upgrading to
3.11 via snap. I installed an 18.04 LTS server and then installed LXD via
snap (3.11). I copied over a few containers that I could easily stop on the
native server. Installing 18.04 LTS server also installs a native copy of
LXD (3.0.3), so my first tests were just starting the containers using the
natively installed packages (no snap at this point). What I wasn't expecting
is that the static IP set on the CentOS 7 container did not follow it with
the snapshot; in fact, it had the generic sysconfig ifcfg-eth0 settings, as
if it were a new, unconfigured CentOS 7 container:

DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
HOSTNAME=rocketchat
NM_CONTROLLED=no
TYPE=Ethernet
MTU=
DHCP_HOSTNAME=`hostname`

If I set the configuration to a static IP and upped the interface, it
worked as expected. I did a fair amount of searching on why the snapshot
does not contain the network information, but came up empty. Is this by
design, and if so, is there a way to include the network settings as they
are on the production container with the snapshot?
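
For reference, a static version of that file might look roughly like this
(addresses hypothetical):

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.50
PREFIX=24
GATEWAY=192.168.1.1
NM_CONTROLLED=no
TYPE=Ethernet

followed by an ifdown eth0 && ifup eth0 inside the container.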

My goal here was ultimately to test lxd.migrate with a few containers
copied over (snapshots), and that does seem to work, sans the network
information. (Yes, lxd.migrate is a totally separate issue; I'm just
letting you know what my goal was when I started this.)

Thanks,
Steven G. Spencer


Re: [lxc-users] future of lxc/lxd? snap?

2019-03-28 Thread Steven Spencer
All,

Regarding the availability of snapd for CentOS 7 (or Red Hat Enterprise
Linux), you need only enable the EPEL repository prior to installing
snapd:

yum install epel-release

Then install snapd:

yum update
yum install snapd

That's it.
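
On CentOS 7 you may also need to enable the snapd socket, and create the
/snap symlink for classic snap support, before installing LXD (these steps
are from the snapcraft docs):

systemctl enable --now snapd.socket
ln -s /var/lib/snapd/snap /snap
snap install lxd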

Thanks,
Steven G. Spencer

On Sun, Mar 24, 2019 at 3:30 PM  wrote:

> Hello
> have you read this documentation ?
> https://docs.snapcraft.io/installing-snap-on-centos/10020
>
> On 26 February 2019 at 09:28, "Harald Dunkel"  wrote:
>
> > On 2/25/19 11:20 AM, Stéphane Graber wrote:
> >> snapd + LXD work fine on CentOS 7, it's even in our CI environment, so
> >> presumably the same steps should work on RHEL 7.
> >>
> > Apparently it doesn't work that fine:
> >
> > [root@centos7 ~]# yum install snapd
> > Loaded plugins: fastestmirror, langpacks
> > Loading mirror speeds from cached hostfile
> > * base: ftp.halifax.rwth-aachen.de
> > * extras: ftp.halifax.rwth-aachen.de
> > * updates: mirror.infonline.de
> > No package snapd available.
> > Error: Nothing to do
> >
> > Of course I found some howtos on the net (e.g.
> > https://computingforgeeks.com/install-snapd-snap-applications-centos-7),
> > but that's not the point. The point is to integrate LXD without 3rd-party
> > tools that are difficult to find and install on their own.
> >
> > Surely I don't blame you for the not-invented-here approach of others,
> > but LXD appears to be difficult to build or integrate, even on native
> > Debian.
> >
> > Regards
> > Harri


Re: [lxc-users] LXD: modify IP of snapshot before starting

2018-12-09 Thread Steven Spencer
Thanks for the responses. The container IP (CentOS 7) is set via
/etc/sysconfig/network-scripts/ifcfg-eth0. I actually didn't use the
copy command, and maybe that is what I should do in this case. We do a
snapshot and then build a new container from the snapshot. The only time we
do this is when the container is nearly the same as another one (same
processes for a different entity). In most cases, we build a new container
from previously set templates. I will try the copy command as you suggest
in this case.
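
Something like this is what I'll try, if I understand the suggestion
correctly (container names hypothetical):

lxc snapshot web1 snap0
lxc copy web1/snap0 web2
lxc file pull web2/etc/sysconfig/network-scripts/ifcfg-eth0 ifcfg-eth0
# edit IPADDR in the pulled copy, then push it back before starting
lxc file push ifcfg-eth0 web2/etc/sysconfig/network-scripts/ifcfg-eth0
lxc start web2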

Thanks,
Steven G. Spencer

On Fri, Dec 7, 2018 at 10:31 PM Serge E. Hallyn  wrote:

> On Fri, Dec 07, 2018 at 09:34:14AM -0600, Steven Spencer wrote:
> > All,
> >
> > My Google search turned up empty, so I'm turning to the list to see if
> > this is possible:
> >
> > * In LXD I make a copy of a container, but want to create a new container
> > from it
> > * The container has a statically assigned IP address, so if I bring up
> > the new
>
> How is the static ip address assigned?  Using raw.lxc, using dhcp config
> on the host, using /etc/network/interfaces or the like in the container's
> rootfs?
>
> How are you currently copying it?  Are you using lxc copy --stateless?
> Can you just pass '-c ' to the lxc copy command to change the
> ipv4 configuration?
>
> > container with the other one running, I'm going to end up with an IP
> > conflict
> > * What I'd like to be able to do is to change the IP of the snapshot
> > before creating a container out of it.
> >
> > Is that possible, or am I missing another method?  I've already done this
> > step before, which works, but isn't the best if you want to keep systems
> > up.
> >
> > * Stop the original container
> > * create the new container with the snapshot
> > * modify the IP of the new container
> > * start the original container
> >
> > If it isn't possible, I'll continue on as I've been doing.
> >
> > Thanks,
> > Steven G. Spencer
>


[lxc-users] LXD: modify IP of snapshot before starting

2018-12-07 Thread Steven Spencer
All,

My Google search turned up empty, so I'm turning to the list to see if this
is possible:

* In LXD I make a copy of a container, but want to create a new container
from it
* The container has a statically assigned IP address, so if I bring up the new
container with the other one running, I'm going to end up with an IP
conflict
* What I'd like to be able to do is to change the IP of the snapshot before
creating a container out of it.

Is that possible, or am I missing another method? I've already used the
steps below, which work, but aren't the best if you want to keep systems up:

* Stop the original container
* create the new container with the snapshot
* modify the IP of the new container
* start the original container
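
In lxc commands, that sequence is roughly (names and snapshot
hypothetical):

lxc stop c1
lxc copy c1/snap0 c2
# fix the static IP inside c2's rootfs here, before its first start
lxc start c1
lxc start c2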

If it isn't possible, I'll continue on as I've been doing.

Thanks,
Steven G. Spencer


Re: [lxc-users] Migration from LXD 2.x packages to LXD 3.0 snap

2018-05-04 Thread Steven Spencer
Thomas,

I don't know if you have been able to answer your own question or not, but
moving from 2.x on my workstation to 3.x did not interrupt the containers I
had running on my local machine. That doesn't mean there isn't something
you need to do if you have a specific filesystem backend (btrfs, zfs).
Hopefully you've been able to continue on!
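
If it helps, the documented path is roughly (hedging, since details vary
by storage backend):

snap install lxd
lxd.migrate   # moves containers and config from the deb LXD to the snap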

Steve

On Fri, Apr 20, 2018 at 11:42 AM Thomas Ward  wrote:

> I'm currently using the Ubuntu backports repositories in 16.04 to get LXD
> 2.x packages.  LXD 3.0 was released recently, and it seems to work better
> with the networking now, getting over issues I had within the LXD 2.x
> series snaps.
>
> However, I've got a bunch of containers running within the LXD 2.x
> infrastructure.  Is there any documentation on how I go about moving from
> the LXD 2.x packages to the LXD 3.0 snap?  Short of rebuilding the entire
> system again, that is.
>
>
> Thomas

Re: [lxc-users] LXD project status

2018-03-27 Thread Steven Spencer
Thanks for all of the comments back.

Per Sean McNamara's numbered remarks:

1.) That makes sense and it is what I figured
2.) I'm fully aware of project status coming into a stable state, I was
just trying for some clarity, which both you and Stéphane have provided
3.) This is always a good idea and I may personally contribute. I can't
necessarily depend on my company to do so, even thought that is the way
things /should/ work.

Per Stéphane Graber:

Thanks again, that all makes sense.

On Tue, Mar 27, 2018 at 2:05 PM, Michel Jansens 
wrote:

> Great! I’m looking forward to that :-)
>
> Michel
>
> > On 27 Mar 2018, at 21:02, Stéphane Graber  wrote:
> >
> > Yes
> >
> > On Tue, Mar 27, 2018 at 08:45:03PM +0200, Michel Jansens wrote:
> >> Hi Stéphane,
> >>
> >> Does this means LXD 3.0 will be part of Ubuntu 18.04 next month?
> >>
> >> Cheers,
> >> Michel
> >>> On 27 Mar 2018, at 19:44, Stéphane Graber  wrote:
> >>>
> >>> We normally release a new feature release every month and have been
> >>> doing so until the end of December where we've then turned our focus on
> >>> our next Long Term Support release, LXD 3.0 which is due out later this
> >>> week.
> >>
> >
> >
> >
> > --
> > Stéphane Graber
> > Ubuntu developer
> > http://www.ubuntu.com

[lxc-users] LXD project status

2018-03-27 Thread Steven Spencer
This is probably a message that Stéphane Graber can answer most
effectively, but I just want to know that the LXD project is continuing to
move forward. Our organization did extensive testing of LXD in 2016 and
some follow-up research in 2017 and plan to utilize this as our
virtualization solution starting in April of this year. In 2017, there were
updates to LXD at least once a month, but the news has been very quiet
since December.

While we realize that there are no guarantees in our industry, I'd just
like to know that, at least for now, LXD is still a viable project and that
development hasn't suddenly come to a screeching halt.

Thanks for your consideration.

Steve Spencer

Re: [lxc-users] LXC container isolation with iptables?

2018-03-03 Thread Steven Spencer
Honestly, unless I'm spinning up a container on my local desktop, I always
use the routed method. Because our organization always thinks of a
container as a separate machine, it makes the build pretty similar whether
the machine is on the LAN or WAN side of the network. It does, of course,
require that each container run its own firewall, but that's what we would
do with any machine on our network.
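
For example, with the addresses from the original question (hypothetical
here), each container's own firewall might boil down to something like:

iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -s 192.168.1.0/24 -j ACCEPT
iptables -P INPUT DROP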

On Thu, Mar 1, 2018 at 2:18 PM, Jan Kowalsky 
wrote:

>
>
> On 28.02.2018 at 05:04, Fajar A. Nugraha wrote:
> > On Wed, Feb 28, 2018 at 12:21 AM, bkw - lxc-user
> >  wrote:
> >> I have an LXC host.  On that host, there are several unprivileged
> >> containers.  All containers and the host are on the same subnet, shared
> via
> >> bridge interface br0.
> >>
> >> If container A (IP address 192.168.1.4) is listening on port 80, can I
> put
> >> an iptables rule in place on the LXC host machine, that would prevent
> >> container B (IP address 192.168.1.5) from having access to container A
> on
> >> port 80?
> >>
> >> I've tried this set of rules on the LXC host, but they don't work:
> >>
> >> iptables -P INPUT DROP
> >> iptables -P FORWARD DROP
> >> iptables -P OUTPUT ACCEPT
> >> iptables -A FORWARD -j DROP -s 192.168.1.5 -d 192.168.1.4
> >>
> >> Container B still has access to container A's port 80.
> >
> >
> > That's how generic bridges work.
> >
> > Some possible ways to achieve what you want:
> > - don't use bridge. Use routed method. IIRC this is possible in lxc,
> > but not easy in lxd.
> > - create separate bridges for each container, e.g with /30 subnet
> > - use 'external' bridge managed by openvswitch, with additional
> > configuration (on openvswitch side) to enforce the rule. IIRC there
> > were examples on this list to do that (try searching the archives)
>
> You could also use the --physdev-in / --physdev-out extension of
> iptables to address the containers' devices directly. Of course,
> you have to pin the network device names with
> lxc.network.veth.pair. One problem could be that, according to the
> lxc.container.conf manpage, this does not seem possible for
> unprivileged containers. For this reason, the routed method could
> probably also have its difficulties.
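>
> A sketch of what I mean (assuming the veth ends were pinned to vethA and
> vethB via lxc.network.veth.pair, and that bridged traffic is visible to
> iptables, i.e. net.bridge.bridge-nf-call-iptables=1):
>
> iptables -A FORWARD -m physdev --physdev-in vethB --physdev-out vethA \
>   -p tcp --dport 80 -j DROP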
>
> Regards
> Jan

Re: [lxc-users] openvz to lxd/lxc

2016-09-12 Thread Steven Spencer
I am using LXD, so sorry about the confusion. In my case, I think rsync
or lsyncd will probably work best for converting the old OpenVZ
containers. Thanks for the response.
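
Something along these lines is what I have in mind (paths and names
hypothetical; the rootfs path assumes a deb-installed LXD 2.x):

lxc launch images:centos/7 migrated
rsync -aHAX --numeric-ids root@ovzhost:/vz/private/101/var/www/ \
    /var/lib/lxd/containers/migrated/rootfs/var/www/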

On Mon, Sep 12, 2016 at 12:56 PM, Sean McNamara <smc...@gmail.com> wrote:

> Firstly you will need to decide whether you want to use lxc, or lxd.
> There is no such thing as "lxd/lxc" as the two tools are completely
> separate, pretty different in behavior, and the interface for
> interacting with them is completely different.
>
> However, in the general case of any arbitrary Linux-based OS in an
> OpenVZ container, there's a fairly high chance that some random
> service or another will fail (for a multitude of reasons) if you try
> to boot its disk image under lxc or lxd; indeed, it may not even boot.
>
> It depends on what your container's OS is, though. An OpenVZ image of,
> say, Ubuntu 16.04, *might* be a better fit for direct copying into an
> lxc or lxd container, though you would still have to at least adjust
> things like networking, user and group IDs, and things like that.
>
> There is no general-purpose "command structure" that will work with
> *all* OpenVZ containers across *all* OpenVZ versions and allow
> successful, problem-free importing into any arbitrary version of lxc
> or lxd, though. You can try copying the raw files of the image into an
> lxd or lxc container you've created from scratch and then see if it
> boots, and fix it where it fails; or you can take the (IMO) easier
> path, and create a new container and then copy over the files you need
> via the host.
>
>
> On Mon, Sep 12, 2016 at 1:33 PM, Steven Spencer <sspencerw...@gmail.com>
> wrote:
> > Greetings,
> >
> > Is there a command structure that will allow for an import of an openvz
> > container into lxd/lxc or is the best method to use a new container and
> > rsync any content needed?
> >
> > Thanks,
> > Steven G. Spencer
> >

[lxc-users] openvz to lxd/lxc

2016-09-12 Thread Steven Spencer
Greetings,

Is there a command structure that will allow for an import of an OpenVZ
container into lxd/lxc, or is the best method to use a new container and
rsync any content needed?

Thanks,
Steven G. Spencer

Re: [lxc-users] LXC 2.0.4, LXCFS 2.0.3 and LXD 2.0.4 have been released!

2016-08-19 Thread Steven Spencer
Thanks Ed!

On Fri, Aug 19, 2016 at 9:47 AM, McDonagh, Ed <ed.mcdon...@rmh.nhs.uk>
wrote:

> Quoting from Stéphane’s response to the same question from me for the last
> release…
>
>
>
> ‘It takes a week for something to go from proposed to updates, that's to
> allow getting early feedback on any kind of regression.’
>
>
>
> *From:* lxc-users [mailto:lxc-users-boun...@lists.linuxcontainers.org] *On
> Behalf Of *Steven Spencer
> *Sent:* 19 August 2016 14:39
> *To:* LXC users mailing-list
> *Subject:* Re: [lxc-users] LXC 2.0.4, LXCFS 2.0.3 and LXD 2.0.4 have been
> released!
>
>
>
> Stéphane,
>
>
>
> I was wondering why the Xenial upstream doesn't appear to have 2.0.4 yet.
> I have a couple of LXD instances running Ubuntu 16.04, and a sudo apt-get
> update && sudo apt-get upgrade returns no new update for LXD.
>
>
>
> Thanks,
>
> Steve
>
>
>
> On Mon, Aug 15, 2016 at 10:39 PM, Stéphane Graber <stgra...@ubuntu.com>
> wrote:
>
> Hello everyone,
>
> Today the LXC project is pleased to announce the release of:
>  - LXC 2.0.4
>  - LXD 2.0.4
>  - LXCFS 2.0.3
>
> They each contain the accumulated bugfixes since the previous round of
> bugfix releases a bit over a month ago.
>
> The detailed changelogs can be found at:
>  - https://linuxcontainers.org/lxc/news/
>  - https://linuxcontainers.org/lxcfs/news/
>  - https://linuxcontainers.org/lxd/news/
>
> As a reminder, the 2.0 series of all of those is supported for bugfix
> and security updates up until June 2021.
>
> Thanks to everyone who contributed to those projects and helped make
> this possible!
>
>
> Stéphane Graber
> On behalf of the LXC, LXCFS and LXD development teams
>

Re: [lxc-users] LXC 2.0.4, LXCFS 2.0.3 and LXD 2.0.4 have been released!

2016-08-19 Thread Steven Spencer
Stéphane,

I was wondering why the Xenial upstream doesn't appear to have 2.0.4 yet. I
have a couple of LXD instances running Ubuntu 16.04, and a sudo apt-get
update && sudo apt-get upgrade returns no new update for LXD.

Thanks,
Steve

On Mon, Aug 15, 2016 at 10:39 PM, Stéphane Graber 
wrote:

> Hello everyone,
>
> Today the LXC project is pleased to announce the release of:
>  - LXC 2.0.4
>  - LXD 2.0.4
>  - LXCFS 2.0.3
>
> They each contain the accumulated bugfixes since the previous round of
> bugfix releases a bit over a month ago.
>
> The detailed changelogs can be found at:
>  - https://linuxcontainers.org/lxc/news/
>  - https://linuxcontainers.org/lxcfs/news/
>  - https://linuxcontainers.org/lxd/news/
>
> As a reminder, the 2.0 series of all of those is supported for bugfix
> and security updates up until June 2021.
>
> Thanks to everyone who contributed to those projects and helped make
> this possible!
>
>
> Stéphane Graber
> On behalf of the LXC, LXCFS and LXD development teams
>