Re: [lxc-users] Clarification

2018-05-26 Thread Sean McNamara
That's correct. LXC and LXD are not hypervisors; a container runs
user-space code on the same kernel as the host. Windows has its own
separate kernel, and its userspace is incapable of running on the Linux
kernel for many reasons (different executable format, different kernel
ABI/API, different division of responsibilities between userspace and
kernel, etc.).

*Some* Windows programs can be run with limited success using Wine,
but Wine will probably never support 100% of all Windows programs at
full functionality. If you wanted to run a specific Windows-only
program that happened to be supported by Wine in an LXC or LXD
container, you could do that by installing a Linux distribution in a
container and running the program under Wine inside it. That would
probably work, but your results would depend on how well Wine supports
the program you need to run.
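
For example, a rough sketch of that setup with LXD (the image alias,
container name, package name, and program path below are placeholders;
GUI programs additionally need an X server reachable from inside the
container):

  lxc launch ubuntu:16.04 winebox                # create and start a container
  lxc exec winebox -- apt-get update
  lxc exec winebox -- apt-get install -y wine    # package name varies by release
  lxc exec winebox -- wine /root/yourprogram.exe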

If you really want to just run *Windows itself* rather than emulating
certain programs, the only way to do that on top of Linux is to use a
proper hypervisor, like KVM, VMware, or VirtualBox. They will emulate
physical *hardware* and run the real Windows kernel on top of your
base operating system -- with a performance cost, of course.
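
For example, with libvirt and virt-install available, defining a Windows
guest looks roughly like this (a sketch only -- the ISO path, sizes, and
os-variant value are placeholders to adapt to your setup):

  virt-install --name win10 --memory 4096 --vcpus 2 \
    --cdrom /path/to/windows.iso --disk size=60 --os-variant win10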



On Sat, May 26, 2018 at 6:51 PM, Thouraya TH  wrote:
> Hi all,
> containers share the same operating system as the host.
> so i cannot do  lxc-create -n c1 -o windows on ubuntu system ? that's it ?
> i can create windows container only on windows system using docker for
> example ?
> Thank you so much for answer.
> Kind regards.
>
>
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Issue upgrading from LXD apt to snap

2018-04-11 Thread Sean McNamara
Update:

The error about the DB schema from the apt version of lxd
(/usr/bin/lxd) was due to apt, for some reason, deciding to install
lxd *2.0.11* instead of the 2.21 I had been using from backports. I
forced the backport version and now both LXD daemons start
concurrently!
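
(For anyone hitting the same thing on Ubuntu 16.04, forcing the
backports build looks roughly like this; adjust the release name for
other versions:)

  apt-get install -t xenial-backports lxd lxd-client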

However, lxd.migrate doesn't complete cleanly. See below. I am using
ZFS for storage but I have no idea why it's trying to umount / !

# lxd.migrate
=> Connecting to source server
=> Connecting to destination server
=> Running sanity checks

=== Source server
LXD version: 2.21
LXD PID: 17828
Resources:
  Containers: 9
  Images: 3
  Networks: 1
  Storage pools: 1

=== Destination server
LXD version: 3.0.0
LXD PID: 3238
Resources:
  Containers: 0
  Images: 0
  Networks: 0
  Storage pools: 0

The migration process will shut down all your containers then move
your data to the destination LXD.
Once the data is moved, the destination LXD will start and apply any
needed updates.
And finally your containers will be brought back to their previous
state, completing the migration.

Are you ready to proceed (yes/no) [default=no]? yes
=> Shutting down the source LXD
=> Stopping the source LXD units
=> Stopping the destination LXD unit
=> Unmounting source LXD paths
=> Unmounting destination LXD paths
=> Wiping destination LXD clean
=> Moving the data
=> Moving the database
=> Backing up the database
=> Opening the database
=> Updating the storage backends
error: Failed to update the storage pools: Failed to run: zfs set
mountpoint=/var/lib/snapd/hostfs/ tank/root: umount: /: target is busy
(In some cases useful info about processes that
 use the device is found by lsof(8) or fuser(1).)
cannot unmount '/': umount failed


Current state of my ZFS volumes:

# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
tank                    556G  1.21T    19K  none
tank/containers         182G  1.21T    19K  none
tank/containers/cont1  18.9G  1.21T  18.1G  /var/snap/lxd/common/lxd/storage-pools/tank/containers/cont1
tank/containers/cont2  5.85G  1.21T  6.03G  /var/snap/lxd/common/lxd/storage-pools/tank/containers/cont2
tank/containers/cont3   252M  1.21T   433M  /var/snap/lxd/common/lxd/storage-pools/tank/containers/cont3
tank/containers/cont4  10.5G  1.21T  9.00G  /var/snap/lxd/common/lxd/storage-pools/tank/containers/cont4
#... etc (for several more containers)
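
(In case it helps anyone reproduce or debug this, the obvious things to
check are which dataset ZFS thinks is mounted at / and what is keeping
it busy; the dataset name below comes from the listing above:)

  zfs get -r mountpoint,mounted tank   # where ZFS thinks each dataset lives
  findmnt /                            # what is actually mounted at /
  fuser -vm /                          # processes keeping / busy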





On Wed, Apr 11, 2018 at 11:54 PM, Sean McNamara <smc...@gmail.com> wrote:
> Hi,
>
> I'm on Ubuntu 16.04; the last working version of LXD I had was 2.21
> from backports.
>
> I wanted to upgrade to 3.0 but it isn't in the backports repo yet, it
> seems, so I went ahead and installed the snap. Then I removed the
> original package, perhaps not realizing until it was too late that I
> wasn't supposed to do that.
>
> It seems the lxd.migrate command that you're *supposed* to run (which
> I wasn't aware of until recently) requires the original lxd server to
> be running -- that is, in my case, the one installed via apt. However,
> that one now won't start. It says:
>
> error: The database schema is more recent than LXD's schema.
>
> I wish the lxd binary had more flexibility and information about:
>
>  - How do I identify the directory where this lxd command is looking
> for its database/config files?
>  - How do I *tell it* to use a specific (non-default) directory for
> its database/config files?
>  - What environment variables should I check, if those are applicable?
>
> That info should go in `lxd --help`, IMO. As of now I'm stuck on this
> error. I assume I have a working LXD database in the normal /var
> location that somehow got updated to database schema for version 3.0,
> but I can't start *either* version of LXD now.
>
> When I try to start /usr/bin/lxd, I get the above error. When I try to
> start /snap/lxd/bin/lxd, it just hangs trying to connect:
>
> # /snap/lxd/current/bin/lxd -d
> DBUG[04-12|03:54:05] Connecting to a local LXD over a Unix socket
> DBUG[04-12|03:54:05] Sending request to LXD   etag=
> method=GET url=http://unix.socket/1.0
>
> How can I salvage my original LXD config / containers in /var/lib/lxd
> and make it work with the snap?
>
> Thanks,
>
> Sean
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Issue upgrading from LXD apt to snap

2018-04-11 Thread Sean McNamara
Hi,

I'm on Ubuntu 16.04; the last working version of LXD I had was 2.21
from backports.

I wanted to upgrade to 3.0 but it isn't in the backports repo yet, it
seems, so I went ahead and installed the snap. Then I removed the
original package, perhaps not realizing until it was too late that I
wasn't supposed to do that.

It seems the lxd.migrate command that you're *supposed* to run (which
I wasn't aware of until recently) requires the original lxd server to
be running -- that is, in my case, the one installed via apt. However,
that one now won't start. It says:

error: The database schema is more recent than LXD's schema.

I wish the lxd binary had more flexibility and information about:

 - How do I identify the directory where this lxd command is looking
for its database/config files?
 - How do I *tell it* to use a specific (non-default) directory for
its database/config files?
 - What environment variables should I check, if those are applicable?

That info should go in `lxd --help`, IMO. As of now I'm stuck on this
error. I assume I have a working LXD database in the normal /var
location that somehow got updated to database schema for version 3.0,
but I can't start *either* version of LXD now.

When I try to start /usr/bin/lxd, I get the above error. When I try to
start /snap/lxd/bin/lxd, it just hangs trying to connect:

# /snap/lxd/current/bin/lxd -d
DBUG[04-12|03:54:05] Connecting to a local LXD over a Unix socket
DBUG[04-12|03:54:05] Sending request to LXD   etag=
method=GET url=http://unix.socket/1.0

How can I salvage my original LXD config / containers in /var/lib/lxd
and make it work with the snap?

Thanks,

Sean
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD project status

2018-03-30 Thread Sean McNamara
On Fri, Mar 30, 2018 at 8:42 PM, Saint Michael  wrote:
> I am using LXC, plain vanilla. Is there a reading that can help me move to
> LXD 3.0? I am afraid I cannot see why anybody would use LXD vs regular LXC.
> I can do anything I need, so far, with LXC. To copy a container to another
> server I use rsync with some special parameters.
> In general what is the great advantage of using LXD?

LXD is based on the same technologies as LXC, and has no special
kernel component, so it can only use the same kernel interfaces LXC
uses for containerization. So from that perspective, anything you can
do with LXC, you can do with LXD, and vice versa.

A major benefit of LXD is the simplicity of setting up containers
that are isolated from the host and each other, so you can treat them
like VMs in your security posture. Achieving that with plain LXC is
significantly more work.
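
For example, the rsync-to-another-server workflow you describe becomes a
couple of commands with LXD (a rough sketch; the remote name and address
are placeholders, and the target host needs LXD listening on the
network):

  lxc remote add otherhost 192.0.2.10
  lxc copy mycontainer otherhost:mycontainer
  lxc start otherhost:mycontainer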

Also, networking is IMO significantly easier with LXD for many common
setups. The improvement isn't very noticeable in LXD 2.0, but the
latest stable release (2.21) is certainly nice in how much of the work
it does for you.

The goal of LXD is to become as secure and simple as something like
kvm/qemu/vmware/virtualbox, but without any of the overhead of a
hypervisor, kernel on top of kernel, filesystem on top of filesystem,
etc.

See also: https://discuss.linuxcontainers.org/t/comparing-lxd-vs-lxc/24

Sean

>
>
> On Fri, Mar 30, 2018 at 10:40 AM, Simos Xenitellis
>  wrote:
>>
>> On Fri, Mar 30, 2018 at 6:08 AM, gunnar.wagner
>>  wrote:
>> >
>> > so the 'snap-only' policy I thought would be applied for LXD is not that
>> > strict then and traditional .deb packages still exist?
>> >
>>
>> The way I see it, is that it is just Ubuntu 18.04 LTS that gets the .deb
>> package
>> and will keep having it until 2018+5=2023.
>>
>> Ubuntu 16.04 will keep having LXD 2.0.x from the deb repositories
>> until 2016+5=2021.
>>
>> Is it such an issue to have the snap version of LXD?
>>
>> Simos
>>
>> >
>> > On 3/29/2018 8:44 PM, Simos Xenitellis wrote:
>> >
>> > On Thu, Mar 29, 2018 at 4:32 AM, gunnar.wagner
>> >  wrote:
>> >
>> > On 3/28/2018 2:45 AM, Michel Jansens wrote:
>> >
>> > Does this means LXD 3.0 will be part of Ubuntu 18.04 next month?
>> >
>> > I guess (as LXD is using snap packages by default, right) it's not a
>> > matter
>> > of distribution any longer but more of distribution able to run snap
>> > packages well (which not every distribution does as far as I know [i.e.
>> > OpenSUSE])
>> >
>> > Ubuntu 18.04 LTS will be based on LXD 3.0.xx, supported until 2018+5y =
>> > 2023.
>> > Those that have the LXD snap ('lxd', stable channel), are likely to
>> > get upgraded to 3.1, 3.2 and so on,
>> > as the new versions appear.
>> > It was mentioned on the forum in December that Ubuntu 18.04 LTS will
>> > have by default the .deb version of LXD 3.0.
>> >
>> > This happened with Ubuntu 16.04 LTS, which has LXD 2.0.xx (currently at
>> > 2.0.11)
>> > and is supported until 2021. Ubuntu 16.04 LTS was launched with the
>> > new LXD 2.0 at that time.
>> >
>> > When you do 'snap info lxd', you get
>> >
>> > ...
>> > channels:
>> >   stable:        2.21        (5866) 49MB -
>> >   candidate:     2.21        (6005) 51MB -
>> >   beta:          3.0.0.beta7 (6240) 55MB -
>> >   edge:          git-9a60cd9 (6251) 55MB -
>> >   2.0/stable:    2.0.11      (5384) 21MB -
>> >   2.0/candidate: 2.0.11      (5384) 21MB -
>> >   2.0/beta:      ↑
>> >   2.0/edge:      git-d71807e (6069) 20MB -
>> >
>> > which means that there is the option to switch to the snap 'LTS'
>> > version of LXD 2.0 ('2.0/stable').
>> >
>> > Simos
>> > ___
>> > lxc-users mailing list
>> > lxc-users@lists.linuxcontainers.org
>> > http://lists.linuxcontainers.org/listinfo/lxc-users
>> >
>> >
>> >
>> > --
>> > Gunnar Wagner | Yongfeng Village Group 12 #5, Pujiang Town, Minhang
>> > District, 201112 Shanghai, P.R. CHINA
>> > mob +86.159.0094.1702 | skype: professorgunrad | wechat: 15900941702
>> >
>> > ___
>> > lxc-users mailing list
>> > lxc-users@lists.linuxcontainers.org
>> > http://lists.linuxcontainers.org/listinfo/lxc-users
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
>
>
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD project status

2018-03-27 Thread Sean McNamara
I'm sure stgraber will weigh in to provide the details of the
project's long-term plans, but here are a few things to think about
meanwhile:

1. It looks like they're preparing for LXD 3.0. This hasn't taken the
shape of any stable releases since December, true, but there are
frequent beta releases -- here's the commit for beta 7 just recently,
and it seems there's been a beta approximately weekly since about the
new year. 
https://github.com/lxc/lxd/commit/8cc13c42f1c34408ca66b875394ffa81ff70fa59

2. A lack of consistent releases doesn't necessarily mean a project is
dead. It can mean the project is in a really good state right now and
doesn't really need improvement, or the developers are spending a lot
of time working on big new features that require a lot of work before
anything can be demoed or merged into master, even in prerelease
shape.

3. If you're planning to use LXD for a commercial purpose, I would
strongly suggest investing in paid support from Canonical to get that
extra assurance of the continuity of support longer term. They still
offer this on their website, which tells me they must be retaining
some LXD developers who can help you and actually provide that
support, including patches if you find bugs that are breaking your
production workloads. Your investment in LXD support would also go a
long way to helping ensure the project remains maintained for the
foreseeable future, even if you don't need any patches yourself.

Canonical may have had some spectacular product failures in recent
years (Ubuntu Phone, Unity among others), but I don't think LXD is
among them. And in any case, when you are assessing the activity of a
project, check the Git commit logs (including in feature branches, not
just master) rather than the releases -- releases don't say a whole
lot about a product's activity; Git commits are a finer-grained
indicator.
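
For example, a quick, rough way to gauge overall activity across all
branches of the LXD repository (URL as published on GitHub):

  git clone https://github.com/lxc/lxd.git && cd lxd
  git log --all --oneline --since="3 months ago" | wc -l   # commits across every branch
  git branch -r                                            # feature branches, not just master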

Good luck,

Sean



On Tue, Mar 27, 2018 at 1:19 PM, Steven Spencer  wrote:
> This is probably a message that Stephane Graber can answer most effectively,
> but I just want to know that the LXD project is continuing to move forward.
> Our organization did extensive testing of LXD in 2016 and some follow-up
> research in 2017 and plan to utilize this as our virtualization solution
> starting in April of this year. In 2017, there were updates to LXD at least
> once a month, but the news has been very quiet since December.
>
> To properly test LXD as it would work in our environment, we did extensive
> lab work with it back in 2016 and some follow-up testing in 2017. While we
> realize that there are no guarantees in our industry, I'd just like to know
> that, at least for now, LXD is still a viable project and that development
> hasn't suddenly come to a screeching halt.
>
> Thanks for your consideration.
>
> Steve Spencer
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC master: Legacy Config Items Have Been Removed

2018-02-12 Thread Sean McNamara
For LXD, is it true that the only potential impact is if you use
raw.lxc (raw LXC configuration lines) in a container config or profile?
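
(A quick way to check an existing setup for that, assuming I have the
key name right; the container and profile names are placeholders:)

  lxc profile show default | grep raw.lxc
  lxc config show mycontainer | grep raw.lxc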

Sean

On Mon, Feb 12, 2018 at 4:37 AM, Christian Brauner
 wrote:
> Hey everyone,
>
> We've been making good progress on the way to 3.0 for all affected
> projects under the LXC umbrella. One of the more invasive steps we have
> undertaken yesterday is to remove the support for all legacy
> configuration items from the LXC master branch. We have announced this
> back when LXC 2.1 was
> released: 
> https://linuxcontainers.org/lxc/news/#lxc-21-release-announcement-5th-of-september-2017
> Below, you will find a list of configuration items that have been
> removed and their equivalent new configuration items. Note that LXC
> ships an upgrade script
>
> lxc-update-config -h|--help [-c|--config]
>
> which can be used to update a legacy config to a new config. The script
> will make a backup of the old config with the extension *.backup in the
> same directory where the old config resides. In case of a failed update
> the legacy config can easily be restored.
>
> Christian
>
>
> Legacy Key (removed from LXC master) | New Key   |
> -|---|
> lxc.aa_profile   | lxc.apparmor.profile  |
> lxc.aa_allow_incomplete  | lxc.apparmor.allow_incomplete |
> lxc.console  | lxc.console.path  |
> lxc.devttydir| lxc.tty.dir   |
> lxc.haltsignal   | lxc.signal.halt   |
> lxc.id_map   | lxc.idmap |
> lxc.init_cmd | lxc.init.cmd  |
> lxc.init_gid | lxc.init.gid  |
> lxc.init_uid | lxc.init.uid  |
> lxc.limit| lxc.prlimit   |
> lxc.logfile  | lxc.log.file  |
> lxc.loglevel | lxc.log.level |
> lxc.mount| lxc.mount.fstab   |
> lxc.network  | lxc.net   |
> lxc.network. | lxc.net.[i].  |
> lxc.network.flags| lxc.net.[i].flags |
> lxc.network.hwaddr   | lxc.net.[i].hwaddr|
> lxc.network.ipv4 | lxc.net.[i].ipv4.address  |
> lxc.network.ipv4.gateway | lxc.net.[i].ipv4.gateway  |
> lxc.network.ipv6 | lxc.net.[i].ipv6.address  |
> lxc.network.ipv6.gateway | lxc.net.[i].ipv6.gateway  |
> lxc.network.link | lxc.net.[i].link  |
> lxc.network.macvlan.mode | lxc.net.[i].macvlan.mode  |
> lxc.network.mtu  | lxc.net.[i].mtu   |
> lxc.network.name | lxc.net.[i].name  |
> lxc.network.script.down  | lxc.net.[i].script.down   |
> lxc.network.script.up| lxc.net.[i].script.up |
> lxc.network.type | lxc.net.[i].type  |
> lxc.network.veth.pair| lxc.net.[i].veth.pair |
> lxc.network.vlan.id  | lxc.net.[i].vlan.id   |
> lxc.pts  | lxc.pty.max   |
> lxc.rebootsignal | lxc.signal.reboot |
> lxc.rootfs   | lxc.rootfs.path   |
> lxc.se_context   | lxc.selinux.context   |
> lxc.seccomp  | lxc.seccomp.profile   |
> lxc.stopsignal   | lxc.signal.stop   |
> lxc.syslog   | lxc.log.syslog|
> lxc.tty  | lxc.tty.max   |
> lxc.utsname  | lxc.uts.name  |
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] selenium

2017-06-04 Thread Sean McNamara
Sorry, I forgot to include the link for setting up Xorg with GPU
acceleration on LXD:
https://discuss.linuxcontainers.org/t/howto-how-to-run-graphics-accelerated-gui-apps-in-lxd-containers-on-your-ubuntu-desktop/94

On Sun, Jun 4, 2017 at 11:35 AM, Sean McNamara <smc...@gmail.com> wrote:
> Firefox is one of those web browsers that conventionally requires an X
> server to be running for it to execute. This might tempt you to want
> to start up a *physical* Xorg server backed by real graphics hardware,
> but that's very wasteful of resources, unless you want to look at /
> interact with the Firefox instances "live" on the console monitor of
> the computer running these Firefox instances.
>
> If you *do* want to pass enough GPU hardware into the LXD containers
> that you could run a proper desktop environment in a container, with
> access to the GPU, you can start with this guide:
>
> Otherwise, you have plenty of options that do not require any special
> hardware access, and will run on default unprivileged LXD instances
> without any customization from the LXD side.
>
> The two general approaches are:
>
> (1) Run a *headless* web browser, meaning, a web browser that does not
> require an X server. These browsers' visual rendering output is
> generally only available using Selenium's "take screenshot" command.
> You can't view it live.
>
> -or-
>
> (2) Run a *virtual* X server that isn't backed by any hardware, just
> system memory -- like Xvfb, Xvnc, or Xrdp. Once you're running Xvfb or
> another virtual X server, and set the DISPLAY environment variable to
> match the display the Xvfb instance is running under, any web browser
> that normally requires an X server, like Chrome or Firefox, will work,
> and they'll render their windows in software to the virtual X server.
> With this setup, you have the option to be able to view the X server's
> display head "live" by connecting to it through a supported remote
> desktop protocol. For example, Xvnc would allow you to connect to the
> X server over VNC. You might also be able to set up a separate VNC or
> RDP server that lets you connect to an Xvfb instance.
>
>
> As far as performance, headless web browsers can do many optimizations
> -- like lazy rendering, where it only renders if you request a
> screenshot -- to save CPU cycles when performing Selenium tests. You
> don't have to worry about GPU requirements though; most webpages can
> still be rendered efficiently purely on the CPU with no hardware
> acceleration. The notable exception would be if any of the sites you
> visit with Selenium need WebGL or WebCL APIs.
>
>
> To learn more about the headless approach, here are a few options you
> can explore, in order from newest / most experimental to oldest / most
> bitrotten:
>
> (a) Headless Chrom(ium,e) and Headless Firefox are now a thing, at
> least in their alpha builds. Eventually the code for headless
> Chromium/Firefox with Selenium support will make it to a stable
> release of Firefox and Chrome. You can either wait until that happens,
> or grab a pre-release build today and try it out.
>
> http://www.cnx-software.com/2017/04/13/headless-mode-to-be-supported-in-chrome-and-firefox-browsers/
>
> (2) Use jBrowserDriver, which has the advantage that it's very CPU
> efficient, but the disadvantage that it uses a version of WebKit
> embedded within the Java runtime. So the recentness of its JavaScript
> APIs and HTML5 feature availability will depend on the version of Java
> you use. The latest stable Java 8 from Oracle still uses a WebKit
> that's about a year and a half behind the latest Chrome/Firefox in web
> standards support. Fortunately jBrowserDriver is being actively
> maintained.
>
> https://github.com/MachinePublishers/jBrowserDriver
>
> (3) Use PhantomJS, which is also CPU efficient, doesn't depend on
> Java, but is basically unmaintained now -- its prior maintainers have
> moved on, so don't count on major new fixes and features to be
> released any time soon, if ever. PhantomJS is also based on WebKit,
> but from Qt -- and a rather outdated version of it at this point --
> from around 2014. PhantomJS hasn't seen any code changes since
> February, which is a good sign that the project is now entering its
> abandonware phase.
>
>
> Neither PhantomJS nor jBrowserDriver uses Chromium's V8 for
> high-performance JavaScript, either, so they're significantly less CPU
> efficient when it comes to churning through lots of JS code. They use
> upstream WebKit's "JSCore", which isn't great. It's barely serviceable
> and kind of (but not entirely) up to modern web standards.
>
>
> If nothing in the headless category meets your needs (yet?), you ca

Re: [lxc-users] selenium

2017-06-04 Thread Sean McNamara
Firefox is one of those web browsers that conventionally requires a
running X server in order to execute. That might tempt you to start up
a *physical* Xorg server backed by real graphics hardware, but that is
very wasteful of resources unless you actually want to look at and
interact with the Firefox instances "live" on the console monitor of
the machine running them.

If you *do* want to pass enough GPU hardware into the LXD containers
that you could run a proper desktop environment in a container, with
access to the GPU, you can start with this guide:

Otherwise, you have plenty of options that do not require any special
hardware access, and will run on default unprivileged LXD instances
without any customization from the LXD side.

The two general approaches are:

(1) Run a *headless* web browser, meaning, a web browser that does not
require an X server. These browsers' visual rendering output is
generally only available using Selenium's "take screenshot" command.
You can't view it live.

-or-

(2) Run a *virtual* X server that isn't backed by any hardware, just
system memory -- like Xvfb, Xvnc, or Xrdp. Once you're running Xvfb or
another virtual X server, and set the DISPLAY environment variable to
match the display the Xvfb instance is running under, any web browser
that normally requires an X server, like Chrome or Firefox, will work,
and they'll render their windows in software to the virtual X server.
With this setup, you have the option to be able to view the X server's
display head "live" by connecting to it through a supported remote
desktop protocol. For example, Xvnc would allow you to connect to the
X server over VNC. You might also be able to set up a separate VNC or
RDP server that lets you connect to an Xvfb instance.
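
A minimal sketch of the virtual-X approach (the display number and
resolution are arbitrary; xvfb-run is a convenience wrapper that does
roughly the same thing):

  Xvfb :99 -screen 0 1920x1080x24 &
  export DISPLAY=:99
  firefox &          # or simply: xvfb-run -a firefox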


As far as performance goes, headless web browsers can apply many
optimizations -- like lazy rendering, where the page is only rendered
when you request a screenshot -- to save CPU cycles during Selenium
tests. You
don't have to worry about GPU requirements though; most webpages can
still be rendered efficiently purely on the CPU with no hardware
acceleration. The notable exception would be if any of the sites you
visit with Selenium need WebGL or WebCL APIs.


To learn more about the headless approach, here are a few options you
can explore, in order from newest / most experimental to oldest / most
bitrotten:

(1) Headless Chrom(ium,e) and Headless Firefox are now a thing, at
least in their alpha builds. Eventually the code for headless
Chromium/Firefox with Selenium support will make it to a stable
release of Firefox and Chrome. You can either wait until that happens,
or grab a pre-release build today and try it out (a short invocation
sketch appears a little further down).

http://www.cnx-software.com/2017/04/13/headless-mode-to-be-supported-in-chrome-and-firefox-browsers/

(2) Use jBrowserDriver, which has the advantage that it's very CPU
efficient, but the disadvantage that it uses a version of WebKit
embedded within the Java runtime. So the recentness of its JavaScript
APIs and HTML5 feature availability will depend on the version of Java
you use. The latest stable Java 8 from Oracle still uses a WebKit
that's about a year and a half behind the latest Chrome/Firefox in web
standards support. Fortunately jBrowserDriver is being actively
maintained.

https://github.com/MachinePublishers/jBrowserDriver

(3) Use PhantomJS, which is also CPU efficient, doesn't depend on
Java, but is basically unmaintained now -- its prior maintainers have
moved on, so don't count on major new fixes and features to be
released any time soon, if ever. PhantomJS is also based on WebKit,
but from Qt -- and a rather outdated version of it at this point --
from around 2014. PhantomJS hasn't seen any code changes since
February, which is a strong sign that the project is entering its
abandonware phase.


Neither PhantomJS nor jBrowserDriver uses Chromium's V8 for
high-performance JavaScript, either, so they're significantly less CPU
efficient when it comes to churning through lots of JS code. They use
upstream WebKit's "JSCore", which isn't great. It's barely serviceable
and kind of (but not entirely) up to modern web standards.
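
As promised above, here is roughly what driving headless Chrome looks
like from the command line (flag behavior in the early headless builds
may differ from what ships later, so treat this as a sketch; the binary
may be named chromium-browser on your distribution):

  google-chrome --headless --disable-gpu --dump-dom https://example.com
  google-chrome --headless --disable-gpu --screenshot https://example.com   # writes ./screenshot.png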


If nothing in the headless category meets your needs (yet?), you can
always use any X-backed web browser with a virtual X server, as
mentioned. This is the more conventional way to do it, and comes at a
modest performance cost (that adds up if you use lots of instances),
but works fine for small to moderate sized workloads.

http://elementalselenium.com/tips/38-headless


If your question was rather how to manage running instances of Firefox
across many LXD containers while connecting to / managing them from a
centralized driver process, you could definitely use Selenium Grid for
that -- set up a single central Selenium Grid Hub somewhere, then set
up a Node on each LXD container, and have them all connect to the Hub.
Then, when you use RemoteWebDriver to connect to the hub, you'll be
able to get instances from 

Re: [lxc-users] discuss.linuxcontainers.org experiment

2017-04-25 Thread Sean McNamara
Ron,

If you are using LXD as part of line-of-business or mission-critical
infrastructure for an enterprise, I would have expected you to have
already purchased a comprehensive Ubuntu Advantage support plan from
Canonical. That's the most reliable way to get relevant, up-to-date,
"official" advice from Canonical on best practices and usability tips.

The point of Ubuntu Advantage is that you're getting "official" help
from the source, and IIRC it comes with a response time SLA so you can
be sure that if the developers get busy with deadlines, you'll still
get a response within X hours/days.

Full disclosure: I used to be an Ubuntu Advantage customer, and had a
good experience, but I have no financial or social incentive to
promote a Canonical offering... I just think it'd be good to have if
you don't have it already. And if you do have it, use it!

You can also ask on Discourse or the mailing list, but keep in mind
that Discourse and the mailing list are open to the user community, so
you're going to get "unofficial" responses that might be wrong or not
applicable to your situation (such as mine ;)).

To me, it would be a little weird to have some sort of officially
blessed set of Canonical-only posts on the Discourse. Isn't the
purpose of the Discourse to be open to the community? (Including posts
by core devs, who might be Canonical employees but are speaking for
themselves as individuals, not on behalf of the company.)

If having the official advice of the company as a legal entity is
critical to you, I can only give you a positive endorsement of Ubuntu
Advantage as a fellow customer.

Sean




On Tue, Apr 25, 2017 at 4:01 PM, Ron Kelley  wrote:
> Stéphane,
>
> Thanks for setting up the discussion group.  I just joined…
>
> As a suggestion, it would be great if we could have an official “best 
> practices” section supported/endorsed by the Canonical team.  Or, a section 
> whereby people can contribute their designs and others can add their 
> viewpoints.  I know many people use LXC/LXD for home/personal use, but many 
> of use are using this technology in data center production environments.
>
> Some ideas off the top of my head:
> * How to manage tens/hundreds of LXD servers (single host, multi-host, or 
> multi-geo locations)
> * How to quickly find mis-behaving containers (consuming too much resources, 
> etc)
> * How to get container run-time stats per LXD server
> * Best practices when backing up, restoring, cloning containers
> * Best practices when deploying containers (same UID, different UID per 
> container, etc)
>
> As we adopt LXD more and more in our DC designs, it becomes increasingly 
> important for our organization to leverage best practices from the industry 
> experts.
>
> Thanks,
>
> -Ron
>
>
> On Apr 25, 2017, at 1:50 PM, Stéphane Graber  wrote:
>>
>> Hey there,
>>
>> We know that not everyone enjoys mailing-lists and searching through
>> mailing-list archives and would rather use a platform that's dedicated
>> to discussion and support.
>>
>> We don't know exactly how many of you would prefer using something like
>> that instead of the mailing-list or how many more people are out there
>> who would benefit from such a platform.
>>
>> But we're giving it a shot and will see how things work out over the
>> next couple of months. If we see little interest, we'll just kill it off
>> and revert to using just the lxc-users list. If we see it take off, we
>> may start recommending it as the preferred place to get support and
>> discuss LXC/LXD/LXCFS.
>>
>>
>> The new site is at: https://discuss.linuxcontainers.org
>>
>>
>> We support both Github login as well as standalone registration, so that
>> should make it easy for anyone interested to be able to post questions
>> and content.
>>
>> The site is configured to self-moderate, so active users who post good
>> content and help others will automatically get more privileges. That
>> should let the community shape how this space works rather than have me
>> and the core team babysit it :)
>>
>>
>> Discourse (the engine we use for this) supports notifications by e-mail
>> as well as responses and topic creation by e-mail. So for those of you
>> who don't like dealing with web stuff, you can tweak the e-mail settings
>> in your account and then interact with it almost entirely through
>> e-mails.
>>
>> Just a note on that bit, the plaintext version of those e-mails isn't so
>> great right now, it's not properly wrapped, contains random spacing and
>> the occasional html. I subscribed myself to receive all notifications
>> and will try to tweak the discourse e-mail code for those of us who use
>> mutt or other text-based clients.
>>
>>
>> Anyway, please feel free to post your questions over there, share
>> stories on what you're doing with LXC/LXD/LXCFS, ...
>>
>> We just ask that bug reports remain on Github. If a support question
>> turns out to be a bug, we'll file 

Re: [lxc-users] discuss.linuxcontainers.org experiment

2017-04-25 Thread Sean McNamara
It works now.

Thank you.

Sean

On Tue, Apr 25, 2017 at 2:16 PM, Stéphane Graber <stgra...@ubuntu.com> wrote:
> Hi,
>
> Can you try again now?
>
> Looks like some bad interaction between github and discourse due to https.
>
> Stéphane
>
> On Tue, Apr 25, 2017 at 02:03:26PM -0400, Sean McNamara wrote:
>> Reproduced the same error on a fresh install of Firefox 52.x ESR, with
>> zero add-ons/extensions and no cookies anywhere (had never used
>> Firefox before on this system). Logged into GitHub, then went to
>> discuss.linuxcontainers.org, then clicked "Sign Up", then clicked
>> "with GitHub". Message matches my previous post.
>>
>> Sean
>>
>> On Tue, Apr 25, 2017 at 1:57 PM, Sean McNamara <smc...@gmail.com> wrote:
>> > Having trouble registering a new account based on Github auth. I
>> > confirmed I'm logged into Github in another tab. Chrome latest stable
>> > on MacOS Sierra.
>> >
>> > Specific error (in a red box in a pop-up window) says: "Sorry, there
>> > was an error authorizing your account. Perhaps you did not approve
>> > authorization?"
>> >
>> > I never even get a prompt *to* approve authorization.
>> >
>> > Sean
>> >
>> >
>> > On Tue, Apr 25, 2017 at 1:50 PM, Stéphane Graber <stgra...@ubuntu.com> 
>> > wrote:
>> >> Hey there,
>> >>
>> >> We know that not everyone enjoys mailing-lists and searching through
>> >> mailing-list archives and would rather use a platform that's dedicated
>> >> to discussion and support.
>> >>
>> >> We don't know exactly how many of you would prefer using something like
>> >> that instead of the mailing-list or how many more people are out there
>> >> who would benefit from such a platform.
>> >>
>> >> But we're giving it a shot and will see how things work out over the
>> >> next couple of months. If we see little interest, we'll just kill it off
>> >> and revert to using just the lxc-users list. If we see it take off, we
>> >> may start recommending it as the preferred place to get support and
>> >> discuss LXC/LXD/LXCFS.
>> >>
>> >>
>> >> The new site is at: https://discuss.linuxcontainers.org
>> >>
>> >>
>> >> We support both Github login as well as standalone registration, so that
>> >> should make it easy for anyone interested to be able to post questions
>> >> and content.
>> >>
>> >> The site is configured to self-moderate, so active users who post good
>> >> content and help others will automatically get more privileges. That
>> >> should let the community shape how this space works rather than have me
>> >> and the core team babysit it :)
>> >>
>> >>
>> >> Discourse (the engine we use for this) supports notifications by e-mail
>> >> as well as responses and topic creation by e-mail. So for those of you
>> >> who don't like dealing with web stuff, you can tweak the e-mail settings
>> >> in your account and then interact with it almost entirely through
>> >> e-mails.
>> >>
>> >> Just a note on that bit, the plaintext version of those e-mails isn't so
>> >> great right now, it's not properly wrapped, contains random spacing and
>> >> the occasional html. I subscribed myself to receive all notifications
>> >> and will try to tweak the discourse e-mail code for those of us who use
>> >> mutt or other text-based clients.
>> >>
>> >>
>> >> Anyway, please feel free to post your questions over there, share
>> >> stories on what you're doing with LXC/LXD/LXCFS, ...
>> >>
>> >> We just ask that bug reports remain on Github. If a support question
>> >> turns out to be a bug, we'll file one for you on Github or ask for you
>> >> to go file one there (similar to what we've been doing on this list).
>> >>
>> >>
>> >> Hope this is a useful addition to our community!
>> >>
>> >> Stéphane
>> >>
>> >> ___
>> >> lxc-users mailing list
>> >> lxc-users@lists.linuxcontainers.org
>> >> http://lists.linuxcontainers.org/listinfo/lxc-users
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
>
> --
> Stéphane Graber
> Ubuntu developer
> http://www.ubuntu.com
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] discuss.linuxcontainers.org experiment

2017-04-25 Thread Sean McNamara
Reproduced the same error on a fresh install of Firefox 52.x ESR, with
zero add-ons/extensions and no cookies anywhere (had never used
Firefox before on this system). Logged into GitHub, then went to
discuss.linuxcontainers.org, then clicked "Sign Up", then clicked
"with GitHub". Message matches my previous post.

Sean

On Tue, Apr 25, 2017 at 1:57 PM, Sean McNamara <smc...@gmail.com> wrote:
> Having trouble registering a new account based on Github auth. I
> confirmed I'm logged into Github in another tab. Chrome latest stable
> on MacOS Sierra.
>
> Specific error (in a red box in a pop-up window) says: "Sorry, there
> was an error authorizing your account. Perhaps you did not approve
> authorization?"
>
> I never even get a prompt *to* approve authorization.
>
> Sean
>
>
> On Tue, Apr 25, 2017 at 1:50 PM, Stéphane Graber <stgra...@ubuntu.com> wrote:
>> Hey there,
>>
>> We know that not everyone enjoys mailing-lists and searching through
>> mailing-list archives and would rather use a platform that's dedicated
>> to discussion and support.
>>
>> We don't know exactly how many of you would prefer using something like
>> that instead of the mailing-list or how many more people are out there
>> who would benefit from such a platform.
>>
>> But we're giving it a shot and will see how things work out over the
>> next couple of months. If we see little interest, we'll just kill it off
>> and revert to using just the lxc-users list. If we see it take off, we
>> may start recommending it as the preferred place to get support and
>> discuss LXC/LXD/LXCFS.
>>
>>
>> The new site is at: https://discuss.linuxcontainers.org
>>
>>
>> We support both Github login as well as standalone registration, so that
>> should make it easy for anyone interested to be able to post questions
>> and content.
>>
>> The site is configured to self-moderate, so active users who post good
>> content and help others will automatically get more privileges. That
>> should let the community shape how this space works rather than have me
>> and the core team babysit it :)
>>
>>
>> Discourse (the engine we use for this) supports notifications by e-mail
>> as well as responses and topic creation by e-mail. So for those of you
>> who don't like dealing with web stuff, you can tweak the e-mail settings
>> in your account and then interact with it almost entirely through
>> e-mails.
>>
>> Just a note on that bit, the plaintext version of those e-mails isn't so
>> great right now, it's not properly wrapped, contains random spacing and
>> the occasional html. I subscribed myself to receive all notifications
>> and will try to tweak the discourse e-mail code for those of us who use
>> mutt or other text-based clients.
>>
>>
>> Anyway, please feel free to post your questions over there, share
>> stories on what you're doing with LXC/LXD/LXCFS, ...
>>
>> We just ask that bug reports remain on Github. If a support question
>> turns out to be a bug, we'll file one for you on Github or ask for you
>> to go file one there (similar to what we've been doing on this list).
>>
>>
>> Hope this is a useful addition to our community!
>>
>> Stéphane
>>
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] discuss.linuxcontainers.org experiment

2017-04-25 Thread Sean McNamara
Having trouble registering a new account based on Github auth. I
confirmed I'm logged into Github in another tab. Chrome latest stable
on MacOS Sierra.

Specific error (in a red box in a pop-up window) says: "Sorry, there
was an error authorizing your account. Perhaps you did not approve
authorization?"

I never even get a prompt *to* approve authorization.

Sean


On Tue, Apr 25, 2017 at 1:50 PM, Stéphane Graber  wrote:
> Hey there,
>
> We know that not everyone enjoys mailing-lists and searching through
> mailing-list archives and would rather use a platform that's dedicated
> to discussion and support.
>
> We don't know exactly how many of you would prefer using something like
> that instead of the mailing-list or how many more people are out there
> who would benefit from such a platform.
>
> But we're giving it a shot and will see how things work out over the
> next couple of months. If we see little interest, we'll just kill it off
> and revert to using just the lxc-users list. If we see it take off, we
> may start recommending it as the preferred place to get support and
> discuss LXC/LXD/LXCFS.
>
>
> The new site is at: https://discuss.linuxcontainers.org
>
>
> We support both Github login as well as standalone registration, so that
> should make it easy for anyone interested to be able to post questions
> and content.
>
> The site is configured to self-moderate, so active users who post good
> content and help others will automatically get more privileges. That
> should let the community shape how this space works rather than have me
> and the core team babysit it :)
>
>
> Discourse (the engine we use for this) supports notifications by e-mail
> as well as responses and topic creation by e-mail. So for those of you
> who don't like dealing with web stuff, you can tweak the e-mail settings
> in your account and then interact with it almost entirely through
> e-mails.
>
> Just a note on that bit, the plaintext version of those e-mails isn't so
> great right now, it's not properly wrapped, contains random spacing and
> the occasional html. I subscribed myself to receive all notifications
> and will try to tweak the discourse e-mail code for those of us who use
> mutt or other text-based clients.
>
>
> Anyway, please feel free to post your questions over there, share
> stories on what you're doing with LXC/LXD/LXCFS, ...
>
> We just ask that bug reports remain on Github. If a support question
> turns out to be a bug, we'll file one for you on Github or ask for you
> to go file one there (similar to what we've been doing on this list).
>
>
> Hope this is a useful addition to our community!
>
> Stéphane
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] does running NTP in an LXC improve security?

2017-04-24 Thread Sean McNamara
First of all, an "unprivileged" container is still pretty insecure if
you don't have a proper Linux Security Module (LSM) enforcing
Mandatory Access Control to restrict what the container can do.

LXD takes a decent stab at integrating the AppArmor LSM and applies it
pretty well to secure and isolate unprivileged LXD guests out of the
box, especially on Ubuntu 16.04+ using recent LXD versions. Not so
much LXC; please search the list archives for this question being
asked many many times (at least once a week) if you are not 100% clear
on the difference between LXC and LXD.

You could probably further lock down the rules LXD applies to a
container out of the box if you knew more specifically what the
container *should* (and shouldn't) be able to do. You could provide
those rules to AppArmor, which, if you were thorough, would greatly
reduce the usefulness of any exploit inside the container.
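
For instance, LXD lets you append your own entries to the generated
profile via the raw.apparmor config key (a sketch only -- the container
name and the specific deny rule are placeholders, and you should verify
the key against the docs for your LXD version):

  lxc config set ntpbox raw.apparmor 'deny /proc/sys/** wklx,'
  lxc config get ntpbox raw.apparmor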

That said, any mechanism that enables an unauthorized user to execute
arbitrary machine code (shell code) on the CPU, where the exact
machine code is controlled by the attacker -- like injecting their own
binary -- is inherently dangerous, mainly because any breach into
kernelspace allows extremely privileged access to the system. If this
were a rarity, then we wouldn't (still) be seeing Android phones
getting rooted despite manufacturers' best attempts to keep their
kernel patched and impose Mandatory Access Control, like Samsung Knox.

LXC and LXD are really good for situations where you at least
partially trust the container not to maliciously attempt to break into
the host OS. But it's actually very difficult, apart from hardware
virtualization (putting your NTP server in its own domU), to prevent a
*compromised* process from eventually privilege-escalating itself up
to gaining root on the physical OS (the dom0, in your case). And even
then, Xen has had the occasional VM-busting exploit.

The attack surface is large enough, and the number and rate of
vulnerability discovery in the past has been high enough, that if
someone is running arbitrary code on your box, there's a good chance
that -- depending on their level of sophistication -- they'll be able
to escalate privilege in some way... almost certainly by a little, and
possibly by a whole lot. This is independent of whether you're running
LXD, LXC or hardware virtualization.

You _could_ call LXD and its AppArmor profiles "defense in depth"
compared to running an NTP server directly on the dom0, but I
certainly wouldn't claim that it's an impenetrable fortress designed
from the ground up to be fully isolated from the host and to
indefinitely contain the damage of a malicious user / compromised
process. Nor would I claim it would slow down an advanced threat by
very much. It was really designed with convenience in mind, and its
security is slowly improving, but I wouldn't say it's yet rigorous
enough that a sufficiently-paranoid military or intelligence
organization would trust it; so if they wouldn't trust it, you might
not want to yet, either.

Sean


On Mon, Apr 24, 2017 at 11:58 PM, Mike  wrote:
> I need to run NTP on a Xen dom0.  (I'm running it in the dom0 in order
> to have all the Xen guests and host synchronized.)  I'm concerned about
> remote code execution exploits via buffer overflows, for example.
> I have no experience with unprivileged LXCs yet.
>
> Would it provide useful protection of the dom0 to run the NTP daemon in
> an LXC?  Or should I not bother, because the daemon would have no lesser
> privileges anyway?
>
> I was trying to do this, but was encountering some conflicts with
> /proc/xen in starting the LXC.  (I didn't encounter this in a domU.)
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] basic understanding - clarification sought

2017-04-07 Thread Sean McNamara
LXD and LXC are basically separate from a user's point of view. The
`lxc` command is actually LXD; `lxc` followed by a dash, like `lxc-ls`,
is LXC. These are sometimes referred to (e.g. in Ubuntu packaging) as
lxc-1.0 (lxc-ls, etc.) and lxc-2.0 (LXD).
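
For example, the two tool sets look like this side by side (container
names and the image/template arguments are placeholders):

  # LXD (the `lxc` client)
  lxc launch ubuntu:16.04 c1
  lxc list

  # Legacy LXC (the hyphenated tools)
  lxc-create -t download -n c2 -- -d ubuntu -r xenial -a amd64
  lxc-start -n c2
  lxc-ls --fancy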

LXC containers are not too different from Docker; Docker used to use
liblxc as its base.

LXD containers are designed to feel more like a VM, yes. They _can_ be
somewhat larger in size, depending on the image, because they run an
entire guest OS minus the kernel, starting from the init daemon, all
libraries, etc. But the difference in size isn't terrible if you have a
deduplicating filesystem, FS-level compression, or a small number of
containers (or just a huge amount of disk space). A few gigs per
container base image, at most.

I don't foresee any LXD _code_ ever being locked under a proprietary
license. Canonical doesn't really do that. They do have enterprise
support that you can pay for, but in that case, you are paying them
for services (technical advice and possibly individualized patches or
builds), not for source code or software licenses. The software itself
should remain free and open source, though any company (even a company
other than Canonical) could develop proprietary extensions or
integrations at any time if they wanted to. The license won't prevent
them from doing so. I just think it's unlikely in practice.

Sean


On Fri, Apr 7, 2017 at 10:47 PM, gunnar.wagner
 wrote:
> hi everybody,
>
> I am a novice to LXC/LXD and am trying to get a basic understanding
> together. I have grasped some things which I am not sure about whether I got
> them wrong or right.
> Maybe this group is able and willing to confirm or set things straight for
> me
>
> if you run LXD the lxc commands used are different from the lxc commands
> used when running 'bare' lxc (for example 'lxc list'   vs   'lxc-ls
> --fancy')?
>
> LXD runs on the Apache License 2.0 (same as Docker engine) so it could
> happen the same thing to lxd (being divided into Community vs Enterprise
> Edition) any time (legally speaking. Who would be the force to decide on
> such a move? Canonical? Is there any intention to make such a move at any
> point in time?
>
> an LXC container behaves more like a VM than a docker or rkt container does
> (machine- vs app-container), correct? Is it also larger in size?
>
> thanks for clarifying
>
> Gunnar
>
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] would there be value in starting an LXD community online collection of how-to related information

2017-01-18 Thread Sean McNamara
On Wed, Jan 18, 2017 at 12:15 PM, Guido Jäkel  wrote:
>> Here is my opinion on it:
>>
>> 1) We do need documentation, especially tutorials. Lots and lots of
>> tutorials and how-tos . LXD and Docker compete in different niches, but
>> LXD can easily do what Docker does (and sometimes better in certain
>> situations) and part of the reason that Docker is used so much is
>> because of the volume of articles/tutorials for setting it up in
>> production scenarios.
>>
>> 2) I agree with the poster who mentioned that the archive is not
>> searchable (but should be IMO). Is there some way to make it a
>> searchable archive? I understand that there are software-tools that
>> allow the conversion of mail-archives into a forum-like appearance, so
>> that might help
>
> Dear all,
>
> I ask about such some years before. But in my opinion, a typical forum is not 
> much better than list mail -- I would suggest to present all the community 
> information as a Wiki space or something in that way!
>
> Because to my (mostly bad) experiences, in a forum the information is spread 
> around in different threads. And you have to read through pages of statements 
> to extract the "core" and/or "head" of information. In a typical Wiki, you'll 
> get the most up-to-date information on the main page, you may use an history 
> to extract changes and there might be a comment feature enabled for 
> discussions. And a typical wiki engine would allow to search in a 
> full-text-index, of course.
>
> I think, one important topic for HowTo's is around networking. This is 
> because LXC/LXD (and others) become more and more easy to use. This is a 
> great success, but in the other hand it attract more and more users with a 
> lower level of basic skills.


Yes! Part of it is that LXD's network configuration steps are a bit
different from those of its predecessors in the container/virt space,
like OpenVZ and VMware, so people trying to "migrate" to LXD instead
get a migraine learning new networking concepts before they can wrap
their heads around LXD's network configuration.

I currently run LXD in production with about 8 containers on a
non-mission-critical server (if it crashes, it's not the end of the
world), and things are going very well -- it's stable, secure enough
for my needs, great guest/host isolation, guests can do all the things
I expect them to be able to do, etc. -- but the one area that still
eludes me to this day is how exactly macvlan works.

To me, macvlan is just a magic black box. I followed a tutorial by
rote on one of stgraber's blogs and got it working perfectly, but if
it ever breaks, I won't be able to dig very deep in my troubleshooting
efforts due to the perceived opacity of the technology and the weird
way it works compared to traditional networking (physical interfaces
and bridges are about as far as I go, normally; but I don't even know
where macvlan sits in the OSI model!). I think we need some (better)
pictures/diagrams, analogies, concrete examples, etc. for some of the
more common networking topologies involving LXD, even if the "details"
of those networking setups are not a feature of LXD itself (if we are
borrowing features from the Linux kernel and expecting our users to
use them to accomplish tasks, it would be helpful to explain how those
features work in a user-friendly way.)
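
(To give a flavor of what I mean: as far as I can tell, under the hood a
macvlan device boils down to something like the iproute2 commands below,
with the parent and child interface names being examples only -- and
that is about the extent of my understanding.)

  ip link add mv0 link eth0 type macvlan mode bridge
  ip link set mv0 up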

And I'd like to see "LXD hardening" guides for doing system-level
configuration unrelated to LXD to increase the security of LXD. For
example, if you are exposing a public /27 IP subnet to a bunch of LXD
containers through macvlan, out of the box there is nothing preventing
one of those containers' root users from acquiring and using any IP
address within that /27. This is a very bad thing for hosting
companies with multi-tenant boxes, so we just have to hope that nobody
is going to try and start a hosting company based on LXD and then
launch their service without locking that down. Ideally, the host-side
configuration would specify which exact IP(s) each container can use,
and the container's root user can do nothing to circumvent those
restrictions.

In the end, what I envision LXD as capable of becoming, is something
equivalent from a user's perspective to VMware, where all the
sysadmin's assumptions about isolation and security in VMware (except
sharing the same kernel) also hold in LXD, except that since you're
not doing virtualization, your performance overhead is extremely low.
And in practice, the "sharing the same kernel" bit *shouldn't* be a
security problem, because if the Linux kernel security team does their
job, the vulnerabilities will get found and closed in short order so
your customers can't break out of the container. Then you just have to
enable rebootless kernel updates on the host and you're good to go.

But LXD out of the box is a long way from that. It can probably be
made to work that way if you know enough about which isolation
weaknesses exist and how to close them using 

Re: [lxc-users] How to open a ticket with LXC

2016-11-07 Thread Sean McNamara
LXC: https://github.com/lxc/lxc/issues/new
LXD: https://github.com/lxc/lxd/issues/new

(Be sure to know which project your issue applies to before opening an issue.)

Sean


On Mon, Nov 7, 2016 at 1:10 PM, Saint Michael  wrote:
> Does anybody know how to open a bug with LXC?
> I cannot figure it out. Ubuntu does point me to another site, but I cannot
> see how to open a new ticket.
>
>
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] openvz to lxd/lxc

2016-09-12 Thread Sean McNamara
Firstly you will need to decide whether you want to use lxc, or lxd.
There is no such thing as "lxd/lxc" as the two tools are completely
separate, pretty different in behavior, and the interface for
interacting with them is completely different.

However, in the general case of any arbitrary Linux-based OS in an
OpenVZ container, there's a fairly high chance that some random
service or another will fail (for a multitude of reasons) if you try
to boot its disk image under lxc or lxd; indeed, it may not even boot.

It depends on what your container's OS is, though. An OpenVZ image of,
say, Ubuntu 16.04, *might* be a better fit for direct copying into an
lxc or lxd container, though you would still have to adjust at least
networking, user and group IDs, and similar details.

There is no general-purpose "command structure" that will work with
*all* OpenVZ containers across *all* OpenVZ versions and allow
successful, problem-free importing into any arbitrary version of lxc
or lxd, though. You can try copying the raw files of the image into an
lxd or lxc container you've created from scratch and then see if it
boots, and fix it where it fails; or you can take the (IMO) easier
path, and create a new container and then copy over the files you need
via the host.
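
For the "copy the raw files and see what breaks" route, a very rough sketch
(assuming an apt-installed LXD with the rootfs under /var/lib/lxd/containers,
an OpenVZ private area at /vz/private/101, and a privileged container to
sidestep the uid/gid shifting for a first test):

lxc init ubuntu:16.04 migrated
lxc config set migrated security.privileged true
rsync -aHAX --numeric-ids /vz/private/101/ /var/lib/lxd/containers/migrated/rootfs/
lxc start migrated
# then fix networking, fstab, consoles and whatever services complain from there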


On Mon, Sep 12, 2016 at 1:33 PM, Steven Spencer  wrote:
> Greetings,
>
> Is there a command structure that will allow for an import of an openvz
> container into lxd/lxc or is the best method to use a new container and
> rsync any content needed?
>
> Thanks,
> Steven G. Spencer
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] clarification

2016-08-11 Thread Sean McNamara
On Thu, Aug 11, 2016 at 8:12 PM, Worth Spending  wrote:
> I'm currently reading thru the documentation at: https://linuxcontainers.org
> to learn lxc.
>
> There seems to be multiple ways of running lxc commands.
>
> lxc-start, lxc-stop, lxc-attach, lxc-ls


The "hyphenated" commands are from the "legacy" LXC command line interface.


>
> or lxc with sub commands.
>
> lxc start
>
> lxc stop
>
> lxc list

The "non-hyphenated" commands are for the **LXD** (D, not C) container
hypervisor. This is a completely different product/application than
LXC. The LXD client binary, `lxc`, is extremely unfortunately named
and thus very confusing for new users, which has been discussed about
9000 times on this mailing list.




>
> So, the question is: What is the current preferred usage for lxc commands?
> hyphenated commands or lxc with sub commands?


You need to look into the benefits and drawbacks of using either LXC
or LXD (consider each one separately in terms of what it offers, how
it's implemented, and how it's used) and make a decision. If you use
the hyphenated LXC commands, any containers you create in that
environment will be completely invisible to LXD, and vice versa: each
tool keeps track of its containers differently, so neither one knows
about the other's containers.
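
To make the split concrete, a rough side-by-side (release names and image
aliases are just examples):

# legacy LXC command line (containers managed by the lxc-* tools)
lxc-create -n c1 -t download -- -d ubuntu -r xenial -a amd64
lxc-start -n c1
lxc-ls -f

# LXD, driven through its confusingly named "lxc" client
lxc launch ubuntu:16.04 c2
lxc list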


>
> Thanks...
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] is LXD production ready ??

2016-03-21 Thread Sean McNamara
On Mon, Mar 21, 2016 at 11:15 PM, Mahesh Patade  wrote:
> Hi All,
>
> We are planning to have LXD as our virtualization layer for production
> systems. Currently we are using Xenserver 6.
>
> I want to know pros and cons.

LXD 2.0 will probably be "mostly production ready", but I wouldn't be
surprised if you see a few quick point releases with important fixes
in the first month or two after it's released.

If I were doing this for a commercial purpose (for business), I would
implement a plan like this:

1. Set up some test hardware to run LXD on.
2. When Ubuntu 16.04 LTS is released, install LXD and evaluate.
3. Work out any issues you have. Seek commercial support from
Canonical if the open source venue doesn't resolve your issues.
4. Perform extensive testing. If you don't find any issues, then after
3 or 4 months you might consider making this your production
environment.

If you're doing anything safety critical, mission critical or life
critical, I would extend that 3 to 4 months time frame to at least a
year, though your organization probably should already have a policy
around very formalized testing for that kind of industry.

I wouldn't recommend using LXD 2.0 in a commercial / mission-critical
production environment outside of Ubuntu 16.04 LTS, because it has
several important integration points into the distro (kernel version,
libraries, kernel patches, etc.) -- you'd have to do very extensive
testing to certify it on another distro. Canonical's doing most of
that work for you on Ubuntu, though.

If you're really risk averse, give it at least another year...


Sean

>
> thanks,
> Mahesh
>
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] How to setup a static IP in a container with LX[C|D] 2.0.0.*

2016-03-19 Thread Sean McNamara
On Fri, Mar 18, 2016 at 12:09 PM, Sean McNamara <smc...@gmail.com> wrote:
> On Fri, Mar 18, 2016 at 11:43 AM, Stéphane Graber <stgra...@ubuntu.com> wrote:
>> Our stance hasn't changed. LXD doesn't know nor care about layer-3
>> networking, all it does is setup your layer-2.
>>
>> Having LXD pre-initialize your network namespace confuses the heck out
>> of a bunch of distros which expect all network to be unconfigured by the
>> time they apply their own config (they don't clean things up so
>> duplicate entries lead to failure).
>
>
> Okay.
>
> As someone migrating from OpenVZ (and before that, VMware), one
> important use case I was expecting of LXD is that of multi-tenant
> boxes, where you need to give root access to a container to the
> "tenant", and expect them to adhere to a Terms of Service agreement,
> but need to have technical mitigations in place, so that even if they
> decide to violate the ToS (or innocently have their box hacked by a
> malicious third-party who decides to violate the ToS), access to other
> containers and the physical box (host OS) is very difficult to
> impossible (pending any undiscovered vulnerabilities or host-side
> misconfiguration).
>
> As part of that, I was expecting some way to tell LXD to restrict the
> IP addresses that can be claimed/used by a given container. For
> instance, if I have a public Internet IPv4 /26 allocated to a physical
> host by a hosting provider, I'll want to assign only one or two IP
> addresses to each container. Currently, I can have an LXD container
> just spuriously decide to use any arbitrary IP, and I haven't found a
> way to prevent it from doing that if an untrusted user has root access
> in the container. They can just run ifconfig and specify the IP
> address they want to use.
>
> How can I configure the host environment (LXD or something else on the
> host, assuming I'm running a very recent Ubuntu 16.04 Beta nightly) so


Just wanted to clarify that I am *not* using or intending to use a
pre-release of 16.04 in a production environment. I'm currently
satisfied with LXD 0.24 on Ubuntu Server 14.04.4 LTS. I'm not
currently in a situation where I have untrusted root users with access
to containers, but I am planning to open up that type of usage in the
future if LXD turns out to be able to support it. And of course that
would be using the final release of Ubuntu Server 16.04 LTS.

Thanks,

Sean


> that no packets can be transmitted to/from the guest unless the guest
> is using a specific IP or set of IPs? I also want to make sure that no
> broadcasting is occurring; i.e., the root user in the container should
> not be able to sniff layer 2 and see all the packets going to all the
> other containers.
>
> ...Or is LXD not suitable for this use case? If it isn't, will it ever be?
>
> Thanks,
>
> Sean
>
>
>
>>
>>
>> Nevertheless, we have recently allowed the following key through raw.lxc:
>>  - lxc.network.X.ipv4
>>  - lxc.network.X.ipv4.gateway
>>  - lxc.network.X.ipv6
>>  - lxc.network.X.ipv6.gateway
>>
>> Note that we require you set the interface index (X above) as mixing
>> those raw entries with the LXD generated config would otherwise randomly
>> cause an invalid config and container startup failure.
>>
>>
>> The recommended way to manage IPs with LXD is to do it exactly the same
>> way you would do it for your VMs or physical machines, so either
>> configure your DHCP server to give a static lease or configure the
>> container to use a static IP (you can use lxc file pull/push/edit to do
>> it on a stopped container).
>>
>> On Fri, Mar 18, 2016 at 10:18:33AM -0400, Sean McNamara wrote:
>>> First of all, there's no such thing as LX[C|D]. You're either using
>>> LXC or LXD. They're different enough in their configuration and
>>> operation that you can't ask an "either-or" question. Pick one
>>> solution and focus on that.
>>>
>>> I just wanted to chime in to say that I have this same question. I'm
>>> stuck using a pre-2.0 release of LXD because it allows me to use the
>>> "raw.lxc" config parameter to specify the IP settings for the guest.
>>> This configuration parameter was removed at some point prior to the
>>> 2.0 RC, so I ended up editing the source code of LXD to bring it back.
>>> I haven't found any equivalent configuration that works without using
>>> raw.lxc.
>>>
>>> raw.lxc: 
>>> "lxc.network.ipv4=1.2.3.4/32\nlxc.network.ipv4.gateway=5.6.7.8\nlxc.network.hwaddr=00:11:22:33:44:55\nlxc.network.flags=up
>>> \ \nlxc.network.mtu=1500\n"

Re: [lxc-users] How to setup a static IP in a container with LX[C|D] 2.0.0.*

2016-03-19 Thread Sean McNamara
First of all, there's no such thing as LX[C|D]. You're either using
LXC or LXD. They're different enough in their configuration and
operation that you can't ask an "either-or" question. Pick one
solution and focus on that.

I just wanted to chime in to say that I have this same question. I'm
stuck using a pre-2.0 release of LXD because it allows me to use the
"raw.lxc" config parameter to specify the IP settings for the guest.
This configuration parameter was removed at some point prior to the
2.0 RC, so I ended up editing the source code of LXD to bring it back.
I haven't found any equivalent configuration that works without using
raw.lxc.

raw.lxc: 
"lxc.network.ipv4=1.2.3.4/32\nlxc.network.ipv4.gateway=5.6.7.8\nlxc.network.hwaddr=00:11:22:33:44:55\nlxc.network.flags=up
\ \nlxc.network.mtu=1500\n"
  volatile.eth0.hwaddr: 00:11:22:33:44:55
  volatile.eth0.name: eth1
devices:
  eth0:
    hwaddr: 00:11:22:33:44:55
    nictype: bridged
    parent: br0

On Ubuntu, you can then set up your bridge as follows in
/etc/network/interfaces:

auto br0
iface br0 inet static
    address 1.2.3.4
    netmask 255.255.255.0
    broadcast 5.6.7.8
    gateway 9.10.11.12
    bridge_ports eth0
    bridge_stp off


This is fine with LXD 0.24, which was built about a month before the 2.0
release candidates started hitting (and with the source code edited to
un-block the raw.lxc param), but I'm afraid to upgrade to LXD 2.0
because I don't know the way forward.

It seems like support for certain basic network topologies is still
being worked out with LXD. It should be easy, well-documented and
flexible a la OpenVZ, but it's really not, as far as I have seen. The
best way to make any progress that I've found thus far is to start
learning Google Go and reading the source code.

Thanks,

Sean



On Fri, Mar 18, 2016 at 9:10 AM, Hans Deragon  wrote:
> Greetings,
>
> Ok, this is ridiculous and I apologize for asking for help with such a simple
> task, but I fail to find the answers by myself.  I fail to find proper
> documentation to set up bridge networking and a static IP.  Newbie here btw and
> setup details at the end of this email.
>
> I got the container running and with DHCP configured, it has its own IP
> which the host can address with.
>
> Obviously, I attempted to set up the static IP many times following
> instructions found on many web pages, to no avail.  For example, I followed
> instructions from https://wiki.debian.org/LXC/SimpleBridge.  But it turns out
> that I am probably running a different version of LXC and that this page is
> now obsolete.
>
> I went so far as to run 'strace lxc restart server2' to realize that
> /var/lib/lxc/server2/config is not read (server2 is the container).  This
> seems to be confirmed by the post at
> http://ubuntuforums.org/showthread.php?t=2275372.
>
> I found 'man lxc.container.conf'.  Seems promising.  However, I fail to find
> within the manual the path where this file should be saved!  If you write
> documentation, please always provide the path where configuration files are
> supposed to be stored.
>
> I created a profile named 'bridged' using commands, but I have not found any
> option/instruction on how to apply that profile on my existing image.  'lxc
> start server2' does not provide any option to start the container with a
> particular profile.  BTW, where are profile configuration files stored?
>
> I need clear step by step instructions, with full paths on how to set things
> up and I fail to find any on the web.  Anybody has a useful link to suggest?
>
> I have a KVM image running (server1) and it works flawlessly with a static
> IP on my bridge.  And it wasn't hard to find instructions on how to set it
> up.  But LXD/LXc is another story.
>
> The setup:
>
> Host:   Ubuntu 14.04 LTS.
> Container:  Ubuntu 14.04 LTS.
> LXD:2.0.0~rc3-0ubuntu4~ubuntu14.04.1~ppa1
> LXC:2.0.0~rc10-0ubuntu2~ubuntu14.04.1~ppa1
>
> Best regards and thanks in advance,
> Hans Deragon
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] How to setup a static IP in a container with LX[C|D] 2.0.0.*

2016-03-19 Thread Sean McNamara
On Fri, Mar 18, 2016 at 11:43 AM, Stéphane Graber <stgra...@ubuntu.com> wrote:
> Our stance hasn't changed. LXD doesn't know nor care about layer-3
> networking, all it does is setup your layer-2.
>
> Having LXD pre-initialize your network namespace confuses the heck out
> of a bunch of distros which expect all network to be unconfigured by the
> time they apply their own config (they don't clean things up so
> duplicate entries lead to failure).


Okay.

As someone migrating from OpenVZ (and before that, VMware), one
important use case I was expecting of LXD is that of multi-tenant
boxes, where you need to give root access to a container to the
"tenant", and expect them to adhere to a Terms of Service agreement,
but need to have technical mitigations in place, so that even if they
decide to violate the ToS (or innocently have their box hacked by a
malicious third-party who decides to violate the ToS), access to other
containers and the physical box (host OS) is very difficult to
impossible (pending any undiscovered vulnerabilities or host-side
misconfiguration).

As part of that, I was expecting some way to tell LXD to restrict the
IP addresses that can be claimed/used by a given container. For
instance, if I have a public Internet IPv4 /26 allocated to a physical
host by a hosting provider, I'll want to assign only one or two IP
addresses to each container. Currently, I can have an LXD container
just spuriously decide to use any arbitrary IP, and I haven't found a
way to prevent it from doing that if an untrusted user has root access
in the container. They can just run ifconfig and specify the IP
address they want to use.

How can I configure the host environment (LXD or something else on the
host, assuming I'm running a very recent Ubuntu 16.04 Beta nightly) so
that no packets can be transmitted to/from the guest unless the guest
is using a specific IP or set of IPs? I also want to make sure that no
broadcasting is occurring; i.e., the root user in the container should
not be able to sniff layer 2 and see all the packets going to all the
other containers.

...Or is LXD not suitable for this use case? If it isn't, will it ever be?

Thanks,

Sean



>
>
> Nevertheless, we have recently allowed the following key through raw.lxc:
>  - lxc.network.X.ipv4
>  - lxc.network.X.ipv4.gateway
>  - lxc.network.X.ipv6
>  - lxc.network.X.ipv6.gateway
>
> Note that we require you set the interface index (X above) as mixing
> those raw entries with the LXD generated config would otherwise randomly
> cause an invalid config and container startup failure.
>
>
> The recommended way to manage IPs with LXD is to do it exactly the same
> way you would do it for your VMs or physical machines, so either
> configure your DHCP server to give a static lease or configure the
> container to use a static IP (you can use lxc file pull/push/edit to do
> it on a stopped container).
>
> On Fri, Mar 18, 2016 at 10:18:33AM -0400, Sean McNamara wrote:
>> First of all, there's no such thing as LX[C|D]. You're either using
>> LXC or LXD. They're different enough in their configuration and
>> operation that you can't ask an "either-or" question. Pick one
>> solution and focus on that.
>>
>> I just wanted to chime in to say that I have this same question. I'm
>> stuck using a pre-2.0 release of LXD because it allows me to use the
>> "raw.lxc" config parameter to specify the IP settings for the guest.
>> This configuration parameter was removed at some point prior to the
>> 2.0 RC, so I ended up editing the source code of LXD to bring it back.
>> I haven't found any equivalent configuration that works without using
>> raw.lxc.
>>
>> raw.lxc: 
>> "lxc.network.ipv4=1.2.3.4/32\nlxc.network.ipv4.gateway=5.6.7.8\nlxc.network.hwaddr=00:11:22:33:44:55\nlxc.network.flags=up
>> \ \nlxc.network.mtu=1500\n"
>>   volatile.eth0.hwaddr: 00:11:22:33:44:55
>>   volatile.eth0.name: eth1
>> devices:
>>   eth0:
>>     hwaddr: 00:11:22:33:44:55
>>     nictype: bridged
>>     parent: br0
>>
>> On Ubuntu, you can then set up your bridge as follows in
>> /etc/network/interfaces:
>>
>> auto br0
>> iface br0 inet static
>> address 1.2.3.4
>> netmask 255.255.255.0
>> broadcast 5.6.7.8
>> gateway 9.10.11.12
>> bridge_ports eth0
>> bridge_stp off
>>
>>
>> This is fine with LXD 0.24 that was built about a month before the 2.0
>> release candidates started hitting (and with edited source code to
>> un-block the raw.lxc param) but I'm afraid to upgrade to LXD 2.0
>> because I don't know the way forward.
>>
>> It seems like support for certai

Re: [lxc-users] LXC Security?

2016-03-02 Thread Sean McNamara
On Wed, Mar 2, 2016 at 2:37 PM, Ingo Baab  wrote:
> Hello LXC-Users,
>
>  I just started to experiment with LXC/LXD and now I am looking for a good
> starting point (some kind of "cookbook") to get UN-priviledged containers
> managed. I am a little confused by lxc versus the (older?) lxc-* commands.
> Are they "different systems"? How are they related?

The legacy lxc-* utilities are a separate system. In my opinion the
"lxc" command is very, VERY poorly named, because it actually serves
as a client for lxd, which is a userspace layer on top of the base lxc
(which is built on a whole set of kernel features, and some very low
level support). I'd call it lxd-client or lxdc or something *other
than* "lxc", unless the long-term plan is to deprecate all the lxc
userspace utilities in favor of lxd's client utility and subsume lxd
into the lxc project as the supported way forward.

The legacy utilities manage container storage and metadata differently
than the lxd system does. The data is in different directories and
stored in incompatible formats.
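
On a stock Ubuntu install the split is easy to see on disk (paths can vary
with packaging), roughly:

ls /var/lib/lxc              # containers created with the legacy lxc-* tools
ls /var/lib/lxd/containers   # containers created through lxd's "lxc" client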


>
> I need:
> - A Cookbook for securing LXC

The cookbook for securing LXC is basically to use *LXD* (through,
confusingly, the lxc command) and run unprivileged containers. In
theory, the latest version of LXD on an OS with it fully integrated
into the distro (like the upcoming Ubuntu 16.04) should be pretty
secure.
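
Unprivileged is the default with LXD; a quick sanity check on an existing
container (c1 here is just a placeholder name) looks roughly like:

lxc config get c1 security.privileged   # empty/false means unprivileged
lxc exec c1 -- cat /proc/self/uid_map   # root inside maps to an unprivileged host uid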

...Though if production-grade, multi-tenant boxes where the tenants
are mutually untrusting is part of your use case, you might want to
seriously consider holding off on lxd until at least a few CVEs have
been filed against it. Given the codebase size, there are probably at
least a few vulns that will be shaken out over time.


> - How are (the older) lxc-* and lxc/lxd related?
>
> Thank you in advance,
> Ingo Baab
>
> _
> Already read here and there..
> https://wiki.ubuntu.com/LxcSecurity
> https://help.ubuntu.com/lts/serverguide/lxc.html#lxc-security
> https://linuxcontainers.org/lxc/security/
> https://www.sans.org/reading-room/whitepapers/linux/securing-linux-containers-36142
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD networking between guest, host, and KVM guest

2015-10-08 Thread Sean McNamara
I found the solution... I just needed to add static routes that mark
the /27 as directly reachable on the link (on-link) _instead of_ going
through the gateway. The
gateway doesn't seem to want to route traffic between VMs on my box.
Can't say I blame it; that's unnecessary load on the hosting
provider's equipment. Instead, I'm simply taking advantage of the
layer 2 bridge that brings my host and all the guests and VMs
together, by telling the routing table that those addresses are
directly reachable on the link, with no intermediate hop.
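
In concrete terms (addresses are placeholders), inside each guest it amounts
to something like:

# the /27 is directly reachable on the local segment, so traffic to the other
# guests never needs the provider's gateway (which wasn't willing to route it)
ip route add 198.51.100.0/27 dev eth0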

Appreciate the help. It got me looking for the right things and
eventually found the answer.


Sean



On Thu, Oct 8, 2015 at 3:27 AM, Fajar A. Nugraha <l...@fajar.net> wrote:
> On Thu, Oct 8, 2015 at 4:47 AM, Sean McNamara <smc...@gmail.com> wrote:
>> Here's an example from LXD config, where the following placeholders
>> are used to mask my specific information:
>>
>
>> "1.2.3.4"
>> "5.6.7.255"
>> "DEFAULT_GATEWAY">
>> "de:ad:be:ef"
>> "MAIN"
>
> all that obfuscation makes my head hurt.
>
>>   raw.lxc: "lxc.network.ipv4=1.2.3.4/32
>> 5.6.7.255\nlxc.network.ipv4.gateway=DEFAULT_GATEWAY\nlxc.network.hwaddr=de:ad:be:ef\nlxc.network.flags=up
>> \  \nlxc.network.mtu=1500\n"
>
> /32 should not have a broadcast address. Doesn't matter if the
> original /27 has a broadcast address, once you use /32, then the
> original broadcast address doesn't apply anymore as everything has to
> go thru the gateway.
>
> On a normal lxc (not lxd), I simply use this
>
> lxc.network.ipv4 = 50.30.36.58/32
> lxc.network.ipv4.gateway = 10.0.0.1
>
> and the result from inside the container:
> # ip ad li eth0
> 96: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
>   link/ether 00:16:3e:c7:b9:d6 brd ff:ff:ff:ff:ff:ff
> inet 50.30.36.58/32 brd 255.255.255.255 scope global eth0
>valid_lft forever preferred_lft forever
>
> # ip route
> default via 10.0.0.1 dev eth0
> 10.0.0.1 dev eth0  scope link
>
> I'm guessing your broadcast setting caused the problem. Try removing
> it on two containers first, and see if they can ping each other. A
> "traceroute" between the two containers should also show that traffic
> goes THRU the gateway instead of directly to the other container's IP.
>
> --
> Fajar
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC with pulseaudio

2015-07-23 Thread Sean McNamara
On Thu, Jul 23, 2015 at 1:36 PM, Matlink matl...@matlink.fr wrote:

 Hi,
 I've got pulseaudio working with an app run in an LXC container (based
 on this https://www.stgraber.org/2014/02/09/lxc-1-0-gui-in-containers/).
 However, since I run this app in a container, pulseaudio seems to be
 locked by this app, and any other app that tries to output sound with
 pulseaudio becomes stuck.
 Is there any conf for pulseaudio to stop apps from locking the socket?



The problem is almost assuredly not that pulseaudio itself locks up when
multiple clients connect. The likely problem is that your underlying
sound card does not support hardware mixing, and thus only one ALSA client
can connect to it at any given time. This is actually a hardware limitation
that transcends all concepts of virtual machine, container, etc. and can
only be resolved by a hypothetical (non-existent) in-kernel software mixer.
Until/unless such a thing exists, you must only have one PulseAudio daemon
running on your system and accessing the sound card directly, at any given
time.
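
If you want to see what your card is capable of, something like this gives a
rough idea (card0/pcm0p being whichever playback device you actually use):

grep subdevices /proc/asound/card0/pcm0p/info
# subdevices_count: 1 means the hardware plays exactly one stream at a time,
# i.e. no hardware mixing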

Even without that, though, it is definitely possible to use PulseAudio from
multiple containers. Unfortunately, to do this, you will need to
*partially* break guest isolation, by allowing guests to connect over TCP
to the PulseAudio server on the host. So each of your LXC guests will have
no PulseAudio server, only a PulseAudio client configuration file to make
it talk to the main (only) PulseAudio server.
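
The short version, assuming the stock lxcbr0 setup where the host is 10.0.3.1
and the containers live in 10.0.3.0/24, is roughly:

# on the host: let PulseAudio accept native-protocol clients from the container network
pactl load-module module-native-protocol-tcp auth-ip-acl="127.0.0.1;10.0.3.0/24"
# in each container: point libpulse at the host's daemon instead of a local one
echo "default-server = 10.0.3.1" >> /etc/pulse/client.conf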

The process of setting up remote PulseAudio is not unique to LXC and is
therefore a bit off-topic for this list. On the PA Wiki you'll find plenty
of information about how to do this, along with several diverse options.
http://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/Network/

-Sean





 --
 Matlink - Sysadmin matlink.fr


 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users