Re: [lxc-users] LXD move container to another pool ?

2018-08-10 Thread Mark Constable

On 10/08/18 17:20, Pierre Couderc wrote:

Note that lxc1 and lxd from the snap use different directories than
lxd from the package.


Sorry for the noise: I use lxd (from sources on debian), and I had
not seen that /var/lib/lxc exists but is empty...


FWIW /var/lib/lxc != /var/lib/lxd
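
For reference, the usual places to look (a quick sketch; the exact paths
depend on how LXD was installed):

~ ls /var/lib/lxd                # lxd from the deb package (or built from source with default paths)
~ ls /var/snap/lxd/common/lxd    # lxd from the snap
~ ls /var/lib/lxc                # lxc1 containers, not used by lxd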

Re: [lxc-users] LXD 3.0 macvlan networking

2018-05-05 Thread Mark Constable

On 5/6/18 4:04 AM, Michel Jansens wrote:

how come I can ping the container from my host when I just set up
that container using macvlan?


Well, on my system with the latest install of Ubuntu 18.04 and LXD 3.0,
the host can't reach a container in a macvlan setup. The container
can't connect to the host either. On a bridged network, it works.


Thanks for the sanity check. That's what I thought should happen.

The only thing I can think of is that I was previously running a
bridge so my host eth device may have still been in promiscuous mode.
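
For anyone wanting to check the same thing, the promiscuity counter shows
up in the detailed link output, along these lines (enp4s0f1 being whatever
the host eth device is):

~ ip -d link show dev enp4s0f1 | grep promiscuity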

Re: [lxc-users] LXD 3.0 macvlan networking

2018-05-05 Thread Mark Constable

On 5/5/18 5:43 PM, Janjaap Bos wrote:

To be able to ping a container macvlan interface, you need to have a
macvlan interface configured on the host.


Thank you for the host macvlan snippet but I CAN actually ping the
container from the host (but not the host from inside the container)
and that was actually my question... how come I can ping the
container from my host when I just set up that container using
macvlan?
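
For reference, the host-side macvlan interface Janjaap suggests would look
roughly like this (a sketch only; the parent device and address are
placeholders for whatever the host actually uses):

~ ip link add macvlan0 link enp4s0f1 type macvlan mode bridge
~ ip addr add 192.168.0.250/24 dev macvlan0
~ ip link set macvlan0 up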


[lxc-users] LXD 3.0 macvlan networking

2018-05-04 Thread Mark Constable
Has something changed re networking with LXD 3.0 such that, when
using macvlan, the host CAN ping a container?

According to what I previously understood, and supported by this
comment..

https://github.com/lxc/lxd/issues/3871#issuecomment-333124249

and the main reason I hadn't bothered even trying out macvlan
is because I need access to my locally hosted containers and it
"just works" with a normal bridge. However, now that I finally
get around to testing macvlan I find I can immediately ping a
new macvlan-based container's IP.

Has something changed recently regarding this macvlan restriction?

~ apt install lxd

~ lxc profile copy default macvlan (which has no eth0 device yet)

~ ip r (to get my hosts eth0 device)

~ lxc profile device add macvlan eth0 nic nictype=macvlan parent=enp4s0f1 name=eth0

~ lxc launch images:ubuntu/bionic macvlantest -p macvlan

~ lxc list --format csv
macvlantest,RUNNING,192.168.0.206 (eth0),"fdcc:3922:7dfd::6b7 (eth0)
fdcc:3922:7dfd:0:216:3eff:fe11:9335 (eth0)",PERSISTENT,0

~ ping -c1 192.168.0.206
PING 192.168.0.206 (192.168.0.206) 56(84) bytes of data.
64 bytes from 192.168.0.206: icmp_seq=1 ttl=64 time=1.98 ms


OIC, from inside the macvlantest container I can't ping the host.

But still, from this comment I would tend to assume I should not
be able to ping the container from the host either...

"@stgraber An even easier alternative to this would be using macvlan as it 
won't require any bridging at all, but it does come with the annoying caveat 
that the host will not be able to communicate with the containers."

Would anyone care to clarify this macvlan limitation please?


Re: [lxc-users] bionic image not getting IPv4 address

2018-05-03 Thread Mark Constable

On 5/3/18 4:09 PM, Kees Bos wrote:
> On Thu, 2018-05-03 at 12:58 +0900, Tomasz Chmielewski wrote:
>> Reproducing is easy:
>> 
>> # lxc launch images:ubuntu/bionic/amd64 bionic-broken-dhcp
>> 
>> Then wait a few secs until it starts - "lxc list" will show it has 
>> IPv6 address (if your bridge was configured to provide IPv6), but
>> not IPv4 (and you can confirm by doing "lxc shell", too):
>
> I can confirm this. Seeing the same issue.

Works as I would expect for me, because I am using a profile that has a
network attached... i.e. it's not a problem with the bionic image.

mbox ~ lxc launch images:ubuntu/bionic/amd64 bionic-broken-dhcp -p medium
Creating bionic-broken-dhcp
Starting bionic-broken-dhcp 
mbox ~ lx
+--------------------+---------+----------------------+
|        NAME        |  STATE  |         IPV4         |
+--------------------+---------+----------------------+
| bionic-broken-dhcp | RUNNING | 192.168.0.129 (eth0) |
+--------------------+---------+----------------------+


mbox ~ lxc profile show medium
config:
  limits.cpu: "2"
  limits.memory: 500MB
description: ""
devices:
  eth0:
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: lxd-pool
    size: 5000MB
    type: disk
name: medium
used_by:
- /1.0/containers/bionic-broken-dhcp
- /1.0/containers/c2
- /1.0/containers/uc1



Re: [lxc-users] bionic image not getting IPv4 address

2018-05-02 Thread Mark Constable
On 5/3/18 12:42 PM, Tomasz Chmielewski wrote:
> > Today or yesterday, bionic image launched in LXD is not getting an IPv4 
> > address. It is getting an IPv6 address.

If you do a "lxc profile show default" you will probably find it doesn't
have an IPv4 network attached by default. I haven't yet found a simple
step-by-step howto example of how to set up a network for v3.0, but in my
case I use a bridge on my host and create a new profile that includes...

lxc network attach-profile lxdbr0 [profile name] eth0

then when I manually launch a container I use something like...

lxc launch images:ubuntu-core/16 uc1 -p [profile name]
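
Putting those two steps together, a minimal sequence might look like this
(the profile and container names are just examples):

lxc profile copy default lanbridge
lxc network attach-profile lxdbr0 lanbridge eth0
lxc launch images:ubuntu/bionic b1 -p lanbridge
lxc list b1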





[lxc-users] [Solved] Re: How to get rid of pesky extra dhcp IP

2018-04-21 Thread Mark Constable
On 22/4/18 3:21 am, David Favor wrote:
> Removing Netplan will work temporarily, until all the old networking
> plumbing is completely removed. Better to start moving to Netplan
> now, before some future update removes old processing of your
> /etc/network/interfaces files + all your networking simply stops
> working.

FWIW I completely remove python, therefore netplan, in my lightweight
containers and I find systemd-networkd works just fine as a replacement
for the old ifupdown package. I can't see it ever being removed so I'd
say it's a safe lightweight long-term substitute for netplan.

~ cat /etc/systemd/network/20-dhcp.network
[Match]
Name=e*

[Network]
DHCP=ipv4
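
On a stock *buntu container, switching over usually also involves something
like the following (a sketch, not a complete recipe):

~ apt purge netplan.io
~ systemctl enable --now systemd-networkd
~ networkctl status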



--
Cheers,
Contact Mark at RentaNet on 0419 530 037
Domain, Web, Mail and Storage Hosting




[lxc-users] LXC containers networking

2018-04-05 Thread Mark Constable
On 06/04/18 03:33, Bhangui, Avadhut Upendra wrote:
> I have a requirement that the solution running inside the container 
> should be able to communicate to services in public cloud and also 
> with some services on the host machine.
> 
> 1. How do I setup the networking of this container? 2. When it will 
> try to communicate to the service on the host machine, will request 
> be routed to machine over the physical network?

IMHO the simplest solution is to provide a "bridge" connection to your
eth device (wifi won't work) on your host. This way your containers
will get an IP from your LAN router and be available from every other
device on your internal LAN. If you then port forward to one of the
container IPs from your router then it's live on the 'net.

If using *buntu then make sure the bridge-utils package is installed,
and if using a normal host desktop with NetworkManager, try these
two config files (change enp4s0f1 to your eth device, and address1)...


~ cat /etc/NetworkManager/system-connections/lxdbr0
[connection]
id=lxdbr0
uuid=2140d6a8-fb95-4d93-9488-58b64e216b81
type=bridge
interface-name=lxdbr0
permissions=

[bridge]
stp=false

[ipv4]
address1=192.168.X.XX/24,192.168.X.1
dns=1.1.1.1;
dns-search=local.lan;
method=manual

[ipv6]
addr-gen-mode=stable-privacy
dns-search=
method=ignore


~ cat /etc/NetworkManager/system-connections/enp4s0f1
[connection]
id=bridge-slave-enp4s0f1
uuid=f9691217-52c2-499e-b310-d5ccd7e1373f
type=ethernet
interface-name=enp4s0f1
master=lxdbr0
permissions=
slave-type=bridge

[ethernet]
auto-negotiate=true
mac-address=80:FA:5B:00:2C:48
mac-address-blacklist=

[ipv4]
dns-search=
method=link-local

[ipv6]
addr-gen-mode=stable-privacy
dns-search=
method=auto


OR... if using systemd-networkd then try these...


~ cat /etc/systemd/network/MyBridge.netdev
[NetDev]
Name=lxdbr0
Kind=bridge


~ cat /etc/systemd/network/MyBridge.network
[Match]
Name=lxdbr0

[Network]

#DHCP=ipv4
Address=192.168.X.XX/24
Gateway=192.168.X.XX
DNS=1.1.1.1


~ cat /etc/systemd/network/MyEth.network
[Match]
Name=e*

[Network]
Bridge=lxdbr0
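
With the host bridge up, the containers still need to be pointed at it;
with LXD that is roughly (assuming the default profile is being used):

lxc profile device add default eth0 nic nictype=bridged parent=lxdbr0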




[lxc-users] Can't setgid in container

2018-02-27 Thread Mark Constable

LXD 2.21 on a month-old *buntu bionic host with new containers. When
installing something like postfix I am getting this error, which obviously
cripples postfix...

postfix/postfix-script: warning: not set-gid or not owner+group+world 
executable: /usr/sbin/postqueue
postfix/postfix-script: warning: not set-gid or not owner+group+world 
executable: /usr/sbin/postdrop

I have not touched these on the host but googling seems to indicate something
with mismatched subuid/subgid mappings might be a cause. chmod 02555 does nothing.

~  cat /etc/subuid
sysadm:10:65536
markc:165536:65536
lxd:231072:65536
root:231072:65536

~ cat /etc/subgid
sysadm:10:65536
markc:165536:65536
lxd:231072:65536
root:231072:65536

I've tried some strace'ing but my lxd-foo is not up to scratch. Any ideas?

Re: [lxc-users] LXD module for Cockpit?

2017-06-14 Thread Mark Constable

On 15/06/17 12:56, Stéphane Graber wrote:

https://lists.linuxcontainers.org/pipermail/lxc-users/2015-November/010516.html
Anyone aware of any new Cockpit module development for LXD?


But we do have a good friend, Martin Pitt, who's working on the
Cockpit team and who's pretty familiar with LXD. He may have some
ideas on what would be involved there and give some pointers to
anyone interested to work on this.


Well I would certainly be prepared to put some time into this because
I've got the beginnings of a prototype PHP + sudo/bash module that
could work but using Cockpit as a foundation would make so much more
sense. However I've only just started to re-evaluate Cockpit so I'm
not too familiar with how it works yet. A little bit of input would
help to get a new cockpit-lxd (?) module to a testing stage for
further feedback from anyone else wanting some kind of web frontend.

[lxc-users] LXD module for Cockpit?

2017-06-14 Thread Mark Constable

I asked this question ~18 months ago...

https://lists.linuxcontainers.org/pipermail/lxc-users/2015-November/010516.html

Anyone aware of any new Cockpit module development for LXD?

As I noted before, any Canonical OpenStack based service is way too
heavy for what I want. Cockpit itself has matured and is available as
regular *buntu packages (not via a PPA), and because it's websocket
based and largely written in C, with JS/CSS for the web frontend,
it's very lightweight and performant.

There is a cockpit-machines package that seems to work with libvirt
so maybe it's half way there.

Re: [lxc-users] Need help with static IP address -- Simplest use case.

2017-06-09 Thread Mark Constable

On 10/06/17 12:03, Michael Johnson wrote:

Hi Mark. Thanks for responding. I've done exactly what you suggest, and
here is the result from within the container:

ip -4 route show
default via 192.168.0.1 dev eth0  metric 12
192.168.0.0/24 dev eth0  proto kernel  scope link  src 192.168.0.36
[...]


Perhaps you are missing the bridge being part of your profile?

lxc profile list

lxc network attach-profile lxdbr0 default eth0

i.e. my default profile for lxd 2.14 shows...

~ lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/containers/z1
- /1.0/containers/z2
- /1.0/containers/z3

Re: [lxc-users] Need help with static IP address -- Simplest use case.

2017-06-09 Thread Mark Constable

On 10/06/17 11:40, Michael Johnson wrote:

I'm able to configure a bridge on the host, and the host uses the
bridge just fine. How do I get the container to use the bridge? The
container seems to ignore all interfaces not created by 'lxc network
create'. I'm guessing because iptables gets populated by that
command.


Once you have started a container then simply...

lxc exec CONTAINER_NAME bash

and bring up the eth0 interface however your OS normally does that job.

An example for debian/ubuntu is to edit /etc/network/interfaces, add the
normal static IP values that you would use for any host, then restart
networking via systemd.
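
A minimal sketch of such an /etc/network/interfaces entry (the addresses are
placeholders for whatever your LAN actually uses):

auto eth0
iface eth0 inet static
  address 192.168.0.50
  netmask 255.255.255.0
  gateway 192.168.0.1
  dns-nameservers 192.168.0.1

followed by something like "systemctl restart networking" inside the container.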


Possibly the iptables instance in the container needs some entries
as well but hoping someone can confirm or deny that.

Best to disable any firewall rules until you get the basics working.

Re: [lxc-users] Forwarding DNS requests to the host /etc/hosts file

2017-05-31 Thread Mark Constable

On 01/06/17 02:34, Adil Baig wrote:

lxc config device add mycontainer etchosts disk path=/etc/hosts source=/etc/hosts


1. Is very cool! I tried it and it works.


Yes, a good hint to know about, thanks simplyadilb.


I am more interested in 2. as it seems more future proof when we move
away from simple hosts file. Any suggestions on how to configure an
internal dns. Do I need to start another instance for dnsmasq? Can I
reuse the default? How would the container relay DNS requests to the
host?

What I do on my internal testing LAN is to set up one container with a
real DNS server (pdns + pdns_recursor) with a web frontend and I point
ALL my local computers' and containers' /etc/resolv.conf to this nameserver
and control all local LAN DNS resolution in one place. Because pdns can
use a MySQL backend I intend to inject entries directly into the domains
and records tables during container setup.
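
As a rough sketch of that injection idea, assuming the standard PowerDNS
gmysql schema and a zone already present in the domains table (names and
addresses here are only illustrative):

mysql pdns -e "INSERT INTO records (domain_id, name, type, content, ttl) \
  SELECT id, 'c1.example.lan', 'A', '192.168.0.51', 300 \
  FROM domains WHERE name='example.lan'"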

One thing I found is that it's quite feasible to "masquerade" a real
domain with internal LAN IPs so that containers can resolve each other
directly via LAN IPs yet the rest of world sees that domain as pointing
to my external router IP.

Re: [lxc-users] would there be value in starting an LXD community online collection of how-to related information

2017-05-29 Thread Mark Constable

On 30/05/17 10:17, Luis Michael Ibarra wrote:

For now we have discussions, Core dev blogs, github *md files, lxd
wiki, etc. Shouldn't be useful to have an official documentation
channel?

I lean towards an independent option so along those lines this is one
possibly crazy suggestion, FWIW...

- someone register linuxcontainers.wiki (~$30/yr)
- start with a 1GB DigitalOcean droplet
- optionally start a patreon.com project to fund the above
- install Wordpress to easily manage user accounts
- install some plugins like...
  - https://wordpress.org/plugins/yada-wiki/
  - https://wordpress.org/plugins/github-embed/
  - https://wordpress.org/plugins/asgaros-forum/
  - https://wordpress.org/plugins/jetpack-markdown/
  - https://wordpress.org/plugins/wordpress-social-login/

Use WP to mainly manage users and host the plugins but any site pages
can also be easily managed and of course the blog part could be used
for "latest news" and "featured articles". The lightweight forum could
be used for meta discussion and of course the wiki plugin is just that.
The github-embed plugin can provide feedback on various Github projects
and the social-login plugin mostly avoids having to specifically signup
to yet-another-blog-site to get directly involved with the wiki.

I could set all of this up in about 15 minutes but it's a complete waste
of my time unless other folks actually wanted to use it.

Re: [lxc-users] Args for lxd init via script

2017-05-26 Thread Mark Constable

Just to complete this thread and kind of mark it [SOLVED], I got
back to getting these scripts 99% working after losing my entire
primary BTRFS drive because a typo set my boot partition to
"zfs_volume" (yikes!)

https://raw.githubusercontent.com/netserva/sh/master/bin/setup-lxd

I know to the experts this is probably brainlessly simple but if
I had a working example like this months ago it would have saved
me a man-week of rtfm and testing... and a complete OS reinstall.

Thread summary and solution, lxd init itself accepts most of the
useful arguments so no need to independently add them in a script...

lxd init --auto \
  --network-address 12.34.56.78 \
  --network-port 8443 \
  --storage-backend zfs \
  --storage-create-loop 50 \
  --storage-pool lxd-pool \
  --trust-password changeme

[lxc-users] Mounting a swap file inside a container

2017-05-25 Thread Mark Constable

Is it possible to mount a swap file inside a zfs loop based container?

If so, how would I first disable the host swap inside a container?

I tried this...

lxc profile set medium limits.memory.swap false

which gets me this profile...

~ lxc profile show medium
config:
  limits.cpu: "2"
  limits.memory: 512MB
  limits.memory.swap: "false"
description: ""
devices:
  eth0:
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: pool50GB
    size: 5GB
    type: disk
name: medium
used_by:
- /1.0/containers/c8

but the host swap is still mounted inside the container.

Or, is there a better strategy to provide per-container swap limits?

Re: [lxc-users] Args for lxd init via script

2017-05-21 Thread Mark Constable

On 5/22/17 12:28 PM, Fajar A. Nugraha wrote:

Yes but I also want the current disk usage to be available inside
the container so that, for instance, df returns realistic results.


Have you tried lxd with zfs?


Yes, zfs (pool per container) is what I am currently using here...

https://raw.githubusercontent.com/netserva/sh/master/bin/setup-lxd


Did you mean zfs dataset?


Is that the same thing as a "volume" (as seen by lxd) ?

I would normally use btrfs but it can't provide usage statistics.


Using separate POOL per container should be possible in newer lxd
(e.g. in xenial-backports). However that would also negate some
benefits of using zfs, as you'd need to have separate block
device/loopback files for each pool.


That's what I suspect, not the best approach.

Besides, I am pretty sure that the performance penalty of using
loopback instead of a block device would get worse at large numbers.


Using a default zfs pool, and having separate DATASET (or to be more
accurate, filesystem) per container, is the default setup. Which
would provide correct disk usage statistic (e.g. for "df" and such).


I did try something with "volumes" but I got the impression it did not
provide correct "df" results; then again, I was floundering around just
trying to get anything to work (with df) at the time.
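
For what it's worth, with a dataset-per-container setup the thing that makes
"df" line up is the per-dataset quota, roughly (the dataset name here is
illustrative):

~ zfs set quota=5G lxd-pool/containers/c1
~ lxc exec c1 -- df -h /    # the 5G quota should now show as the filesystem size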


And it's perfectly normal to have several hundred or thousand dataset
(which would include snapshots as well) on a single pool.


And I'd imagine copying and moving containers would make more sense.


IIRC ubuntu's roadmap is to integrate lxd (and zfs) into openstack
(which sould have lots of control panel already).


But unfortunately not in PHP so I can't deploy and extend whatever
control panels openstack offers. As far as I can see, one needs a
minimum of 5 servers just to get started with openstack so that is
180 degrees away from where I am going (super lightweight and simple).


In the mean time, your best bet is probably create your own (possibly
based on lxd-webui).


Thanks for the hint. I'm not interested in developing in Angular but
I should be able to get some frontend ideas. Once I have a basic bash
script setup working then I'll create a PHP plugin for my framework.

Re: [lxc-users] Args for lxd init via script

2017-05-21 Thread Mark Constable

On 5/21/17 11:16 PM, gunnar.wagner wrote:

just for my understanding ... you want to monitor disk usage on the
LXD host, right?


Yes but I also want the current disk usage to be available inside the
container so that, for instance, df returns realistic results.

Using a zfs pool per container works just fine for this purpose but I
am concerned that having potentially many 100s of zfs pools per server
may not be very efficient. This sums up what I am after...


For example, PHP's disk_total_space() and disk_free_space()
functions do work accurately with a zfs pool and seeing that I am
working towards a LXD plugin for my hosting control panel I really
need disk limits to work similar to a VPS or Xen VM.

IOW if I supply 5GB of space to a paying client I need to have a way
for both them and myself to easily monitor that disk space. It's the
one thing that has stopped me from using LXD for real. Well that and
not having an open source PHP control panel that runs on Ubuntu servers.

Re: [lxc-users] Args for lxd init via script

2017-05-21 Thread Mark Constable

On 5/21/17 4:02 PM, Jeff Kowalczyk wrote:

My question, is it reasonable to provide a separate profile and
zfs pool per container and is there a better or more efficient way
to get the same end result?


Will disk limits work for you?

https://stgraber.org/2016/03/26/lxd-2-0-resource-control-412/
https://github.com/lxc/lxd/blob/master/doc/containers.md#type-disk


Thanks for the suggestion and links. I'll have to test whether the
"size" limit is actually reflected within the container so that df
and other tools can be used to monitor disk usage.

For example, PHP's disk_total_space() and disk_free_space() functions
do work accurately with a zfs pool, and seeing that I am working towards
a LXD plugin for my hosting control panel I really need disk limits to
work similarly to a VPS or Xen VM.

Re: [lxc-users] Args for lxd init via script

2017-05-20 Thread Mark Constable

On 5/19/17 5:06 PM, Fajar A. Nugraha wrote:

I'm trying to automate a simple setup of LXD via a bash script and
I'm not sure of the best way to provide some preset arguments to "lxd
init", if at all possible. Specifically...


Did you try "lxd --help"?


Sigh, not for the last year or two, thanks Fajar.

Now for an almost related question. I've got a mostly working setup script
for my needs but I am wondering if there is a better way to achieve the
same results...

https://github.com/netserva/sh/blob/master/bin/setup-lxd

I want to provide a defined amount of disk space per container, visible
within the container, and the only way I can find to do that is to use a
zfs pool per container; the best way to define that pool is to provide
a container-specific profile per container.

My question: is it reasonable to provide a separate profile and zfs pool
per container, and is there a better or more efficient way to get the same
end result?


[lxc-users] Args for lxd init via script

2017-05-19 Thread Mark Constable

I'm trying to automate a simple setup of LXD via a bash script and I'm
not sure of the best way to provide some preset arguments to "lxd init",
if at all possible. Specifically...

Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd-pool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 5
Would you like LXD to be available over the network (yes/no)? yes
Do you want to configure the LXD bridge (yes/no)? no

I'm hoping someone here has already been down this path, any suggestions?

[lxc-users] Add default route to LXD 2.8 bridge

2017-02-06 Thread Mark Constable

I'd like to have LXD manage a host network aware bridge and it almost
works except for needing two extra manual "brctl addif" and "ip route
add default via" steps. Would some variation of this...

lxc network set lxdbr0 ipv4.routes (set default gw)

possibly work for a default route?

Alternatively is there some host side non-LXD systemd or ip-up like
triggers that could run the brctl and ip route commands when LXD brings
up whatever bridge it is managing?
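
One host-side possibility (just a sketch, not anything LXD-specific; the
device and gateway are placeholders) would be a small oneshot unit ordered
after LXD:

~ cat /etc/systemd/system/lxdbr0-uplink.service
[Unit]
Description=Attach uplink and default route to lxdbr0
After=lxd.service
Requires=lxd.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/brctl addif lxdbr0 enp4s0f1
ExecStart=/sbin/ip route add default via 192.168.0.1 dev lxdbr0

[Install]
WantedBy=multi-user.target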

I know I could use the standard networking system to provide a host
bridge and just tell LXD to use it but I'd prefer to let LXD itself
manage this bridge so I can use the LXD API to manage it, especially
on remote hosts.

~ lxc network show lxdbr0
config:
  dns.domain: xx.org
  ipv4.address: 192.168.0.2/24
  ipv4.routes: 192.168.0.0/24
  ipv4.routing: "true"
  ipv6.address: none
name: lxdbr0
type: bridge
usedby:
- /1.0/containers/z1
managed: true

[lxc-users] Increase lxd-loop size

2017-01-15 Thread Mark Constable

Ubuntu zesty btrfs host, xenial zfs containers. How do I increase the size of
the default lxd-loop pool above 10GB?

lxd-loop/containers/dev    zfs  2.3G  1.8G  477M  80%  /var/lib/lxd/containers/dev.zfs
lxd-loop/containers/docker zfs  7.4G  7.0G  477M  94%  /var/lib/lxd/containers/docker.zfs

I've tried this below per container but it doesn't make any difference...

lxc config device set docker root size 20GB

root@docker ~/mailcow-dockerized docker-compose up -d
[...]
ERROR: failed to register layer: Error processing tar file(exit status 1):
 write /usr/share/locale/id/LC_MESSAGES/bash.mo: no space left on device
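
For the record, growing a loop-backed pool is really a zfs-level job rather
than an lxc config one; a rough sketch, assuming the default loop file
location for this LXD version:

~ truncate -s +20G /var/lib/lxd/zfs.img
~ zpool set autoexpand=on lxd-loop
~ zpool online -e lxd-loop /var/lib/lxd/zfs.img

The per-container root "size" only applies a quota within the pool, it does
not grow the pool itself.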

Re: [lxc-users] Looking for LXD-2.4.1 Static IP Setup Documentation

2016-10-18 Thread Mark Constable
On 19/10/16 07:10, David Favor wrote:
> I'd prefer the "tweak each container config" approach.
> Be great if someone could provide a URL for an example.

It is very hard, if not impossible, to find examples of using lxc config to set
the IP of each container and I'm still not sure it's even possible.

Can anyone confirm this, and if so, provide an example?

ATM what I do is either use dnsmasq to provide DHCP or stop the
container and push a preconfigured interfaces file...

lxc file push interfaces.tmp CONTAINER/etc/network/interfaces
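
For the dnsmasq route, pinning a container to a fixed address is a one-line
entry in dnsmasq.conf (the MAC, IP and name here are placeholders):

dhcp-host=00:16:3e:aa:bb:cc,192.168.0.50,web1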


Re: [lxc-users] Looking for LXD-2.4.1 Static IP Setup Documentation

2016-10-18 Thread Mark Constable
On 19/10/16 05:02, David Favor wrote:
> Looking for best way to change 10.87.167.115 to 144.217.33.224 
> (static/public IP). Prefer doing this in a way where communication
> between host/container + container/container works without adding
> iptables rules, which become complex to manage with many containers.

The simplest way I am aware of is to set up your own bridge on a public
IP range that you want to allocate and provide your own DHCP server that
you have control over, or use lxc config to tweak the config of each
container to provide it with a static IP as you go.


Re: [lxc-users] inventory / dashboard tool to manage LXD containers' lifecycle

2016-09-05 Thread Mark Constable
On 05/09/16 19:38, Nicola Volpini wrote:
> As per subject: is there any existing project able to manage the 
> lifecycle of LXD containers via some form of frontend/webgui?
> [...] 
> Anyone out there who managed to integrate these tools and LXD? It
> would be cool to do for LXD what has been done for VMs.

Absolutely a huge ++1 from me.

For a systemd based OS, this is in the ballpark if the LXD API was
added to the Docker smarts...

https://github.com/cockpit-project/cockpit

If Cockpit was a Go based project I would have got more involved.
If there is nothing more advanced then this could make for a good
starting point, FWIW...

https://github.com/lxc/lxd-demo-server


[lxc-users] btrfs and lxc init

2016-07-08 Thread Mark Constable
The only 2 options for lxc init are dir and zfs.

Why no btrfs option?

Or, if I choose either one, will the snapshot and send/receive advantages
of btrfs still be taken advantage of? If so, which one should I choose?

Re: [lxc-users] Container scaling - LXD 2.0

2016-05-08 Thread Mark Constable

On 09/05/16 10:18, Ronald Kelley wrote:

I am trying to get some data points on how many containers we can
run on a single host if all the containers run the same applications
(eg: Wordpress w/nginx, php-fpm, mysql). We have a number of LXD 2.0
servers running on Ubuntu 16.04 - each server has 5G RAM, 20G Swap,
and 4 CPUs.


I want to do almost exactly this too, plus owncloud.


I have read about Kernel Samepage Memory (KSM), and it seems to be
included in the Ubuntu 16.04 kernel. So, in theory, we can over
provision our containers by using KSM.


Excellent point, I am also keen to know if KSM can help with this kind
of setup. My thoughts had been to run a single instance of MySQL (or
MariaDB) in its own container, because a couple of dozen instances of
MySQL (with InnoDB) take up at least 100MB each, so running a single
instance of MySQL would definitely help; but if KSM is a real thing
with containers then it may not be necessary to consider such
workarounds.


Re: [lxc-users] Wildcard in lxd commands?

2016-03-19 Thread Mark Constable

On 17/03/16 23:01, Janne Savikko wrote:

You can not use filters to list running or stopped containers. Lxc
start or stop do not support filters, only container name (or names).
You though can always pipe commands if you want to stop dozens of
containers whose names begin with "web" (note! lxc list keyword
filter compares from the start of the name, so "lxc list eb" does not
work in this case):


   $ lxc list web|grep RUNNING|awk '{ print $2 }'|xargs lxc stop


It's still rather awkward to reliably script a start/stop of a single
container that happens to be called "web" when there might be web1,
web2 etc. An explicit non-filtered arg to lxc list with optional regex
would be more useful. Plus an option to have plain non-tablewriter
output for easier script parsing.

[[ `lxc list -cs web` = RUNNING ]]; echo $?
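
For the record, later LXD releases did grow a --format option, so an
exact-name check becomes scriptable along these lines (a sketch):

[[ $(lxc list '^web$' --format csv -c s) = RUNNING ]] && echo yay || echo nay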


Re: [lxc-users] [Solved]RE: Networking LXD containers

2016-03-14 Thread Mark Constable

On 15/03/16 13:03, efersept wrote:

Thank you Fajar, I have tried putting entries in
/etc/network/interfaces on an Ubuntu host but they are completely
ignored. Well that is not completely true, static IPs can be set for
eth0 but bridge entries and wlan0 entries are ignored The only
success I have had is creating bridge interface files using
systemd-networkd in /etc/systemd/network. I don't really understand
why.


FWIW I use a kubuntu xenial host and set USE_LXC_BRIDGE="false" in
/etc/default/lxc-net and use NetworkManager to configure my own lxcbr0
bridge (so I can easily switch to wifi). I also disable the NM invoked
dnsmasq and manage all my local containers and other local hosts via a
single /etc/dnsmasq.conf file...

~cat /etc/NetworkManager/NetworkManager.conf
[main]
plugins=ifupdown,keyfile,ofono
#dns=dnsmasq

[ifupdown]
managed=false

I've had trouble trying to use NM to configure a bridge, but one time it
did work, so I kept a copy of the system-connections file; now I just
copy the config file below to any new desktop system...

~ cat /etc/NetworkManager/system-connections/lxcbr0
[connection]
id=lxcbr0
uuid=a557cbf5-2f45-4958-a4bf-bfd1c00ce6c3
type=bridge
interface-name=lxcbr0
permissions=user:markc:;
secondaries=

[ipv4]
address1=192.168.0.2/24,192.168.0.1
dns=192.168.0.2;
dns-search=goldcoast.org;
method=manual

[ipv6]
dns-search=
method=auto
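
The single /etc/dnsmasq.conf mentioned above boils down to something like
this (the values are only illustrative):

~ cat /etc/dnsmasq.conf
interface=lxcbr0
domain=example.lan
dhcp-range=192.168.0.100,192.168.0.200,12h
dhcp-host=00:16:3e:aa:bb:cc,192.168.0.50,web1
server=8.8.8.8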


Re: [lxc-users] Clean output for lxc list

2016-03-10 Thread Mark Constable

On 10/03/16 18:54, Stéphane Graber wrote:

We've had a few folks ask for a --format option of some sort which
would allow them to get the info in csv, tabular or json/yaml
format.


One simple approach could be that if one used the default lxc list
with anything other than -c (or --columns) then the current ascii
border eye-candy remains intact but as soon as -c is used then the
ascii border (and header) is removed and the vertical bars replaced
with tabs. The logic being that if someone uses -c then they know
what they are looking for and do not need the extra visual support.

ie;...

~ lxc list gc3
+------+---------+--------------------+------+------------+-----------+
| NAME |  STATE  |        IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+--------------------+------+------------+-----------+
| gc3  | RUNNING | 192.168.0.3 (eth0) |      | PERSISTENT | 0         |
+------+---------+--------------------+------+------------+-----------+

~ lxc list gc3 -c ns46tS
gc3    RUNNING    192.168.0.3 (eth0)    PERSISTENT    0


[lxc-users] Clean output for lxc list

2016-03-09 Thread Mark Constable

I'm not sure if this is already possible but a suggestion for lxc list
would be to provide a "clean" output option without ascii borders. Using
mysql as an example it would be neat if something like this was possible...

[[ $(lxc list $HOST -BN -cs) = RUNNING ]] && echo yay || echo nay

~ mysql -e "show databases"
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
+--------------------+

~ mysql -Be "show databases"
Database
information_schema
mysql

~ mysql -BNe "show databases"
information_schema
mysql

Re: [lxc-users] Docs for all unpriv container limits?

2016-03-08 Thread Mark Constable

On 09/03/16 17:01, Stéphane Graber wrote:

Where do I find the kernel boot parameters to enable memory and swap limits?


swapaccount=1
https://www.kernel.org/doc/Documentation/kernel-parameters.txt


Thanks yet again. Now that I have something to google for, would you mind
confirming whether this one is or is not needed now?

cgroup_enable=memory

I'm sure I used that a year ago but kernel-parameters.txt does not mention the
"memory" arg...

cgroup.memory=  [KNL] Pass options to the cgroup memory controller.
Format: 
nosocket -- Disable socket memory accounting.
nokmem -- Disable kernel memory accounting.
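
For anyone following along, swapaccount=1 goes on the kernel command line,
typically via /etc/default/grub followed by update-grub and a reboot; a sketch:

~ grep CMDLINE /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash swapaccount=1"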


[lxc-users] Can't start unpriv container (large log dump)

2016-03-08 Thread Mark Constable

I've done this 100s of times before but for some reason I'm getting an
error trying to start an unpriv container. Any clues?

Xenial LXD 2.0.0~rc2-0ubuntu2 w/ btrfs

~ lxc image copy upstream:/ubuntu/xenial/amd64 local: --alias=xenial
Image copied successfully!

~ lxc image list
+--------+--------------+--------+----------------------------------------+--------+---------+-----------------------------+
| ALIAS  | FINGERPRINT  | PUBLIC |              DESCRIPTION               |  ARCH  |  SIZE   |         UPLOAD DATE         |
+--------+--------------+--------+----------------------------------------+--------+---------+-----------------------------+
| xenial | 0594dbb54ade | no     | Ubuntu xenial (amd64) (20160218_03:49) | x86_64 | 64.87MB | Mar 9, 2016 at 6:12am (UTC) |
+--------+--------------+--------+----------------------------------------+--------+---------+-----------------------------+

~ lxc launch xenial gc3
Creating gc3
Starting gc3
error: Error calling 'lxd forkstart gc3 /var/lib/lxd/containers 
/var/log/lxd/gc3/lxc.conf': err='exit status 1'
Try `lxc info --show-log gc3` for more info

~ cat /var/log/lxd/gc3/lxc.conf
lxc.cap.drop = mac_admin mac_override sys_time sys_module
lxc.mount.auto = proc:rw sys:rw
lxc.autodev = 1
lxc.pts = 1024
lxc.mount.entry = mqueue dev/mqueue mqueue rw,relatime,create=dir,optional
lxc.mount.entry = /proc/sys/fs/binfmt_misc proc/sys/fs/binfmt_misc none 
rbind,optional
lxc.mount.entry = /sys/firmware/efi/efivars sys/firmware/efi/efivars none 
rbind,optional
lxc.mount.entry = /sys/fs/fuse/connections sys/fs/fuse/connections none 
rbind,optional
lxc.mount.entry = /sys/fs/pstore sys/fs/pstore none rbind,optional
lxc.mount.entry = /sys/kernel/debug sys/kernel/debug none rbind,optional
lxc.mount.entry = /sys/kernel/security sys/kernel/security none rbind,optional
lxc.include = /usr/share/lxc/config/common.conf.d/
lxc.logfile = /var/log/lxd/gc3/lxc.log
lxc.loglevel = 0
lxc.arch = linux64
lxc.hook.pre-start = /usr/bin/lxd callhook /var/lib/lxd 2 start
lxc.hook.post-stop = /usr/bin/lxd callhook /var/lib/lxd 2 stop
lxc.tty = 0
lxc.utsname = gc3
lxc.mount.entry = /var/lib/lxd/devlxd dev/lxd none bind,create=dir 0 0
lxc.aa_profile = lxd-gc3_
lxc.seccomp = /var/lib/lxd/security/seccomp/gc3
lxc.id_map = u 0 231072 65536
lxc.id_map = g 0 231072 65536
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.hwaddr = 00:16:3e:d8:96:db
lxc.network.name = eth0
lxc.rootfs = /var/lib/lxd/containers/gc3/rootfs
lxc.mount.entry = /var/lib/lxd/shmounts/gc3 dev/.lxd-mounts none 
bind,create=dir 0 0

~ lxc info --show-log gc3
Name: gc3
Architecture: x86_64
Created: 2016/03/09 06:13 UTC
Status: Stopped
Type: persistent
Profiles: default

Log:

lxc 20160309161350.446 INFO lxc_confile - 
confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 231072 range 
65536
lxc 20160309161350.446 INFO lxc_confile - 
confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 231072 range 
65536
lxc 20160309161350.874 INFO lxc_confile - 
confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 231072 range 
65536
lxc 20160309161350.874 INFO lxc_confile - 
confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 231072 range 
65536
lxc 20160309161350.897 INFO lxc_start - 
start.c:lxc_check_inherited:251 - closed inherited fd 3
lxc 20160309161350.897 INFO lxc_start - 
start.c:lxc_check_inherited:251 - closed inherited fd 9
lxc 20160309161350.898 INFO lxc_container - 
lxccontainer.c:do_lxcapi_start:797 - Attempting to set proc title to [lxc 
monitor] /var/lib/lxd/containers gc3
lxc 20160309161350.898 INFO lxc_start - 
start.c:lxc_check_inherited:251 - closed inherited fd 9
lxc 20160309161350.898 INFO lxc_lsm - lsm/lsm.c:lsm_init:48 - 
LSM security driver AppArmor
lxc 20160309161350.898 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .reject_force_umount  # comment 
this to allow umount -f;  not recommended.
lxc 20160309161350.898 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for reject_force_umount 
action 0
lxc 20160309161350.898 INFO lxc_seccomp - 
seccomp.c:do_resolve_add_rule:216 - Setting seccomp rule to reject force umounts

lxc 20160309161350.898 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for reject_force_umount 
action 0
lxc 20160309161350.898 INFO lxc_seccomp - 
seccomp.c:do_resolve_add_rule:216 - Setting seccomp rule to reject force umounts

lxc 20160309161350.898 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .[all].
lxc 20160309161350.898 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .kexec_load errno 1.
lxc 20160309161350.898 INFO lxc_seccomp - 

Re: [lxc-users] Docs for all unpriv container limits?

2016-03-08 Thread Mark Constable

On 09/03/16 16:14, Stéphane Graber wrote:

Where do I find the documentation for all possible memory/swap/whatever
limits that can be applied to LXD 2.0.0~rc2-0ubuntu2 unpriv containers?

And boot parameters etc.


https://github.com/lxc/lxd/blob/master/specs/configuration.md


Thanks.

Where do I find the kernel boot parameters to enable memory and swap limits?


[lxc-users] Docs for all unpriv container limits?

2016-03-08 Thread Mark Constable

Where do I find the documentation for all possible memory/swap/whatever
limits that can be applied to LXD 2.0.0~rc2-0ubuntu2 unpriv containers?

And boot parameters etc.

Re: [lxc-users] lxc / lxd I'm lost somewhere

2016-03-01 Thread Mark Constable

On 02/03/16 01:34, Benoit GEORGELIN - Association Web4all wrote:

User A will have his own space for containers
User B will have his own space for containers

They should do "lxc-ls -f" or "lxc list"  and see only their own containers

Maybe this is not a typical use case ?


I think the best way to achieve this level of user isolation would be to
use nested containers so that each user is assigned to and logged into
a "parent" container and then they have full control of and can only view
their own (nested) containers. I'm not sure how well containers within
containers are supported these days but it does work to some degree.
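
A rough sketch of that nesting approach with LXD (the image alias and names
are placeholders):

lxc launch ubuntu parent-a -c security.nesting=true
lxc exec parent-a -- lxd init --auto
# user A then runs their own "lxc launch/list/..." inside parent-a and
# only ever sees their own nested containers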

As for a LXD version of this...

lxc-create -n test -t ubuntu -B lvm --lvname test --vgname vg_node1 --fstype 
ext4 --fssize 1GB

it could be as simple as...

lxc launch ubuntu test

where extra settings may need a custom profile according to...

https://github.com/lxc/lxd/blob/master/specs/configuration.md


Re: [lxc-users] setcap capabilities

2016-02-28 Thread Mark Constable

FWIW another package that requires setcap. This is the first one I've seen
that falls back to setuid OOTB.

Setting up mtr-tiny (0.86-1) ...
Failed to set capabilities on file `/usr/bin/mtr' (Invalid argument)
The value of the capability argument is not permitted for a file. Or the file 
is not a regular (non-symlink) file
Setcap failed on /usr/bin/mtr, falling back to setuid


Re: [lxc-users] setcap does not work in unprivileged container

2016-02-25 Thread Mark Constable

On 26/02/16 05:56, Serge Hallyn wrote:

Hopefully in the next month or two I'll get time to get that
working in the kernel.  Which means a few more months before
it'd be really available.


Can we expect it to be backported to Xenial?


No, but the HWE and such kernels will have it.  They are just as well
(really, better) supported so that should be no big deal.


With today's kernel 4.4.0-8 update my xenial containers are up and running again,
many thanks, but for the record this package also soft-breaks because of
the setcap issue. Good to hear you will be looking into it as I was under
the impression it was never going to happen.

Setting up systemd (229-1ubuntu4) ...
addgroup: The group `systemd-journal' already exists as a system group. Exiting.
Failed to set capabilities on file `/usr/bin/systemd-detect-virt' (Invalid 
argument)
The value of the capability argument is not permitted for a file. Or the file 
is not a regular (non-symlink) file


Re: [lxc-users] setcap does not work in unprivileged container

2016-02-25 Thread Mark Constable

On 25/02/16 20:16, Tamas Papp wrote:

# /sbin/setcap 'cap_net_bind_service=+ep' /usr/bin/nodejs
Failed to set capabilities on file `/usr/bin/nodejs' (Invalid argument)
The value of the capability argument is not permitted for a file. Or the file 
is not a regular (non-symlink) file

Can we somehow make it work?


The answer seems to be "you can't", sorry.

This is the answer I got to basically the same question a week ago...


On 19/02/16 02:32, Serge Hallyn wrote:

~ /sbin/setcap cap_net_bind_service=+ep /usr/bin/caddy
Failed to set capabilities on file `/usr/bin/caddy' (Invalid argument)


xenial host with a xenial lxd 2.0.0~beta2 unprivileged container


lxd 2.0.0~beta3 now. Can you spare a moment for a little more detail please?


Sorry apparently I was not clear.  If you are in an unprivileged
container, there is nothing you can do to set file capabilities, apart
from writing the kernel patch (and libcap patch) to make namespaaced
capabilities happen.

However any packages in ubuntu should not break due to not being able
to set file capabilities.  I want the namespaced capabilities so we can
stop having fallbacks, but right now if that happens then it is valid
to file a bug against the package which is failing to install.

[lxc-users] /sbin/init failure

2016-02-24 Thread Mark Constable

Using the latest freshly reinstalled xenial host and containers with
2.0.0~beta4-0ubuntu4, which got the packages installed after removing
everything and starting again, but I've had this problem for a couple
of weeks now...

~ lxc image list
+--------+--------------+--------+----------------------------------------+--------+---------+------------------------------+
| ALIAS  | FINGERPRINT  | PUBLIC |              DESCRIPTION               |  ARCH  |  SIZE   |         UPLOAD DATE          |
+--------+--------------+--------+----------------------------------------+--------+---------+------------------------------+
| xenial | 0594dbb54ade | no     | Ubuntu xenial (amd64) (20160218_03:49) | x86_64 | 64.87MB | Feb 25, 2016 at 1:45am (UTC) |
+--------+--------------+--------+----------------------------------------+--------+---------+------------------------------+

~ lxc image copy upstream:/ubuntu/xenial/amd64 local: --alias=xenial

~ lxc launch xenial gc3
Creating gc3
Starting gc3

~ lx
+------+---------+------+------+------------+-----------+
| NAME |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+------+------+------------+-----------+
| gc3  | RUNNING |      |      | PERSISTENT | 0         |
+------+---------+------+------+------------+-----------+

~ lxc exec gc3 bash
root@gc3:~# ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.0  36544  3952 ?        Ss   03:27   0:00 /sbin/init
root        25  0.0  0.0  18220  3160 ?        Ss   03:29   0:00 bash
root        36  0.0  0.0  34412  2804 ?        R+   03:29   0:00 ps aux
root@gc3:~# exit

And of course I have to use "lxc stop gc3 --force" to get it to stop.

Any idea why /sbin/init is not kicking off the rest of the startup procedure?



~ ps auxxww | grep -v grep | grep lx
root  9621  0.0  0.0 234760  3744 ?Ssl  11:41   0:00 /usr/bin/lxcfs 
/var/lib/lxcfs/
root 10264  0.0  0.1 960888 25700 ?Ssl  11:41   0:05 /usr/bin/lxd 
--group lxd --logfile=/var/log/lxd/lxd.log
root 11567  0.0  0.0  7  4552 ?Ss   13:38   0:00 [lxc monitor] 
/var/lib/lxd/containers gc3

~ tail /var/log/lxd/lxd.log
t=2016-02-25T13:38:42+1000 lvl=info msg=handling method=GET url=/1.0 ip=@
t=2016-02-25T13:38:42+1000 lvl=info msg=handling method=GET 
url=/1.0/containers/gc3 ip=@
t=2016-02-25T13:38:42+1000 lvl=info msg=handling url=/1.0/containers/gc3/state 
ip=@ method=GET
t=2016-02-25T13:38:42+1000 lvl=info msg=handling 
url="/1.0/containers/gc3/snapshots?recursion=1" ip=@ method=GET
t=2016-02-25T13:38:42+1000 lvl=info msg=handling method=GET 
url=/1.0/containers/gc3/logs/lxc.log ip=@
t=2016-02-25T13:39:21+1000 lvl=info msg=handling method=GET url=/1.0 ip=@
t=2016-02-25T13:39:21+1000 lvl=info msg=handling ip=@ method=GET 
url=/1.0/containers/gc3
t=2016-02-25T13:39:21+1000 lvl=info msg=handling method=GET 
url=/1.0/containers/gc3/state ip=@
t=2016-02-25T13:39:21+1000 lvl=info msg=handling method=GET 
url="/1.0/containers/gc3/snapshots?recursion=1" ip=@
t=2016-02-25T13:39:21+1000 lvl=info msg=handling method=GET 
url=/1.0/containers/gc3/logs/lxc.log ip=@

~ lxc info gc3 --show-log
Name: gc3
Architecture: x86_64
Created: 2016/02/25 03:27 UTC
Status: Running
Type: persistent
Profiles: default
Pid: 11586
Processes: 1
Ips:
  lo:   inet127.0.0.1
  lo:   inet6   ::1

Log:

lxc 20160225132740.709 INFO lxc_confile - 
confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 231072 range 
65536
lxc 20160225132740.709 INFO lxc_confile - 
confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 231072 range 
65536
lxc 20160225132740.730 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1113 - Not attaching to cgroup 
name=systemd unknown to /var/lib/lxd/containers gc3
lxc 20160225132740.730 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1113 - Not attaching to cgroup cpu unknown 
to /var/lib/lxd/containers gc3
lxc 20160225132740.730 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1113 - Not attaching to cgroup devices 
unknown to /var/lib/lxd/containers gc3
lxc 20160225132740.730 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1113 - Not attaching to cgroup net_cls 
unknown to /var/lib/lxd/containers gc3
lxc 20160225132740.730 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1113 - Not attaching to cgroup memory 
unknown to /var/lib/lxd/containers gc3
lxc 20160225132740.730 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1113 - Not attaching to cgroup hugetlb 
unknown to /var/lib/lxd/containers gc3
lxc 20160225132740.730 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1113 - Not attaching to cgroup perf_event 
unknown to /var/lib/lxd/containers gc3
lxc 20160225132740.730 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1113 - Not attaching to cgroup cpuset 
unknown to 

Re: [lxc-users] setcap capabilities

2016-02-18 Thread Mark Constable

On 19/02/16 12:21, Serge Hallyn wrote:

Unpacking systemd (229-1ubuntu2) over (228-5ubuntu3) ...
dpkg: error processing archive 
/var/cache/apt/archives/systemd_229-1ubuntu2_amd64.deb (--unpack):
  unable to make backup link of './bin/systemctl' before installing new 
version: Operation not permitted

[...]
What does ls -l /bin/systemctl show?


~ ls -l /bin/systemctl
-rwxr-xr-x 1 root root 659848 Feb 14 22:41 /bin/systemctl

I did an "echo 0 > /proc/sys/fs/protected_hardlinks" on the host and
reran the update which proceeded and installed the rest of the package
updates but along the way I got this...

Failed to set capabilities on file `/usr/bin/systemd-detect-virt' (Invalid 
argument)
The value of the capability argument is not permitted for a file. Or the file 
is not a regular (non-symlink) file

~ ls -l /usr/bin/systemd-detect-virt
-rwxr-xr-x 1 root root 35248 Feb 14 22:41 /usr/bin/systemd-detect-virt

~ lsattr /usr/bin/systemd-detect-virt
 /usr/bin/systemd-detect-virt

~ getcap -v /usr/bin/systemd-detect-virt
/usr/bin/systemd-detect-virt


Whereas on the xenial host I get...

~ getcap -v /usr/bin/systemd-detect-virt
/usr/bin/systemd-detect-virt = cap_dac_override,cap_sys_ptrace+ep


So is no one else reporting this problem when upgrading to systemd_229-1ubuntu2?


Re: [lxc-users] setcap capabilities

2016-02-18 Thread Mark Constable

On 19/02/16 02:32, Serge Hallyn wrote:

# for containers to allow suid exec
echo 0 > /proc/sys/fs/protected_hardlinks

on the host but that is going to be awkward for folks who do not happen
to know this "trick" meaning generally trying to install the courier-mta
package on unpriv containers is going to fail in an ugly way that messes
up package install/upgrades.

Any comment on how to make this easier to deal with?


I'm afraid not.  It's the exact case which the authors of the
protected_hardlinks mechanism wanted to protect against...


Thanks for the response Serge but this "problem" all but makes unpriv
containers (xenial at least) unusable. Today's example...

Unpacking systemd (229-1ubuntu2) over (228-5ubuntu3) ...
dpkg: error processing archive 
/var/cache/apt/archives/systemd_229-1ubuntu2_amd64.deb (--unpack):
 unable to make backup link of './bin/systemctl' before installing new version: 
Operation not permitted
dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
addgroup: The group `systemd-journal' already exists as a system group. Exiting.
Failed to set capabilities on file `/usr/bin/systemd-detect-virt' (Invalid 
argument)
The value of the capability argument is not permitted for a file. Or the file 
is not a regular (non-symlink) file
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for dbus (1.10.6-1ubuntu1) ...
Errors were encountered while processing:
 /var/cache/apt/archives/systemd_229-1ubuntu2_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)


[lxc-users] setcap capabilities

2016-02-13 Thread Mark Constable

Outside a container on the host I can...

~ /sbin/setcap cap_net_bind_service=+ep /usr/bin/caddy
~ getcap /usr/bin/caddy
/usr/bin/caddy = cap_net_bind_service+ep

but inside a container I get...

~ /sbin/setcap cap_net_bind_service=+ep /usr/bin/caddy
Failed to set capabilities on file `/usr/bin/caddy' (Invalid argument)
The value of the capability argument is not permitted for a file. Or the file 
is not a regular (non-symlink) file

What procedure should I follow to allow the above cap_net_bind_service=+ep to be
set inside a 2.0.0~beta1 container?

Re: [lxc-users] setcap capabilities

2016-02-13 Thread Mark Constable

On 14/02/16 03:20, Serge Hallyn wrote:

but inside a container I get...

~ /sbin/setcap cap_net_bind_service=+ep /usr/bin/caddy
Failed to set capabilities on file `/usr/bin/caddy' (Invalid argument)


If not in a user namespace, ... well it works for me, but you may
have to edit the files under /usr/share/lxc which get lxc.include'd
to make sure they're not dropping CAP_SETFCAP, and check your
apparmor/selinux policy. I'm not going more into detail on that until
we're sure you're not in a user namespace :)


Woops, sorry, xenial host with a xenial lxd 2.0.0~beta2 unprivileged
container.

***

This could be a separate list post but it may involve a similar solution.

The above is my main concern but this is also related and that is when
I install certain packages (like courier-mta) it requires some suid
capabilities to suid some symlink'd binaries and was failing inside
this same unpriv container. The solution in this case was to set...

# for containers to allow suid exec
echo 0 > /proc/sys/fs/protected_hardlinks

on the host but that is going to be awkward for folks who do not happen
to know this "trick" meaning generally trying to install the courier-mta
package on unpriv containers is going to fail in an ugly way that messes
up package install/upgrades.

Any comment on how to make this easier to deal with?

Re: [lxc-users] make a nginx webserver in a lxc container available to the local wireless lan

2015-12-09 Thread Mark Constable

On 09/12/15 21:41, Eldon Kuzhyelil wrote:

auto lo
iface lo inet loopback


You may be missing...

auto eth0
iface eth0 inet manual


auto br0
iface br0 inet static
[...]



Re: [lxc-users] make a nginx webserver in a lxc container available to the local wireless lan

2015-12-08 Thread Mark Constable

On 08/12/15 18:07, Eldon Kuzhyelil wrote:

Okay i am now doing doing it with ethernet. So basically i am trying to
 setup a webserver in my lxc container and my system is connected to router
 via ethernet cable.I want the web page to be visible from my another system
 connected to this LAN. What i have done right now is this

vi /etc/network/interfaces

auto lo
iface lo inet loopback


Slightly altered from the link I posted before...

auto eth0
iface eth0 inet manual

auto lxcbr0
iface lxcbr0 inet static
   address 192.168.0.2
   netmask 255.255.255.0
   gateway 192.168.0.1
   dns-nameserver 192.168.0.2
   bridge_ports eth0

And if you have multiple bridges on your LAN network then add...

   bridge_stp on

The above presumes you have disabled lxc starting its own lxcbr0 bridge, and
NetworkManager, on your laptop, and that you are running a dedicated dnsmasq on
192.168.0.2 to provide DNS and DHCP for your containers (which may also involve
disabling the DHCP server on your router). With this approach you can use a single
/etc/dnsmasq.conf config file to control both DNS and DHCP assignments for
your entire local LAN network, including containers on any LAN server (as long
as 192.168.0.2 is visible), plus you get a bonus external DNS cache for the entire
LAN if every machine or container uses 192.168.0.2 as its nameserver.
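
For completeness, the disabling steps amount to something like this on the
laptop (a sketch; file and service names as per the stock ubuntu packages):

# stop lxc-net creating its own NAT'd lxcbr0
sed -i 's/^USE_LXC_BRIDGE=.*/USE_LXC_BRIDGE="false"/' /etc/default/lxc-net
# let ifupdown own the interfaces instead of NetworkManager
systemctl disable NetworkManager
systemctl enable networking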

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] make a nginx webserver in a lxc container available to the local wireless lan

2015-12-07 Thread Mark Constable

On 07/12/15 15:41, Eldon Kuzhyelil wrote:

Mr. Tamas, if you have any documents for the options you have mentioned please
 provide them as I am new to this domain.


If you are testing LXD on a laptop then keep in mind you can't use bridging via
WIFI so if you can use an ethernet cable from your main router then there may be
some useful hints here...

https://lists.linuxcontainers.org/pipermail/lxc-users/2015-November/010495.html

If you can surf to https://goldcoast.org then that is a xenial container on my
xenial/kubuntu laptop running nginx and php7 on an internal 192.168.0.3 IP as
a DMZ from my domestic router/modem.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Cockpit-like frontend for LXD?

2015-11-21 Thread Mark Constable

Is anyone aware of any kind of systemd-smart/websocket based web frontend for
the lxd daemon similar in scope to http://cockpit-project.org/ ?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxcbr0?

2015-11-18 Thread Mark Constable

On 18/11/15 08:22, Robert Koretsky wrote:

I have successfully installed and created/started LXC containers on Ubuntu 
15.10,
 but cannot get them to be visible on my home network. I do an ifconfig on both
 the host and in a container, and see the IPv4 address of lxcbr0 as 10.0.3.1,
 but after reading many references cannot figure out how to get my router to
 assign the container an address, like 192.168.0.20 say.


There are a number of ways to do this. This is my approach where my router/modem
is at 192.168.0.1...

. edit /etc/default/lxc-net  and set USE_LXC_BRIDGE="false"
. edit /etc/network/interfaces and add...

auto enp4s0f1 # or eth0
iface enp4s0f1 inet manual
auto lxcbr0
iface lxcbr0 inet static
  address 192.168.0.2
  netmask 255.255.255.0
  gateway 192.168.0.1
  dns-nameserver 192.168.0.2
  dns-search example.org
  bridge_ports enp4s0f1
  bridge_stp on
  bridge_fd 1
  bridge_maxwait 1
#auto wlp3s0 # or wlan0
#iface wlp3s0 inet static
#  address 192.168.0.2
#  netmask 255.255.255.0
#  broadcast 192.168.0.255
#  gateway 192.168.0.1
#  dns-nameserver 192.168.0.2
#  dns-search example.org
#  wpa-ssid "MY AP"
#  wpa-psk 

. you may need to "systemctl enable networking" and disable NetworkManager
. edit /etc/dnsmasq.conf as something like this...

domain-needed
bogus-priv
no-resolv
no-hosts
expand-hosts
cache-size=1
local-ttl=120
log-async=10
dns-loop-detect
dhcp-authoritative
log-queries
log-dhcp
bind-interfaces
server=8.8.8.8
server=8.8.4.4

# switch between wifi and lan-bridge
interface=lxcbr0
server=192.168.0.2@lxcbr0
#interface=wlp3s0
#server=192.168.0.2@wlp3s0

dhcp-option=option:router,192.168.0.1
addn-hosts=/etc/addn-hosts
domain=example.org
server=/mbox.goldcoast.org/192.168.0.2
server=/0.168.192.in-addr.arpa/192.168.0.2
local=/goldcoast.org/
address=/my.example.org/192.168.0.2
address=/lxc1.example.org/192.168.0.3
address=/lxc2.example.org/192.168.0.4
address=/lxc3.example.org/192.168.0.5
mx-host=example.org,lxc1.example.org,10
dhcp-range=192.168.0.3,192.168.0.99,255.255.255.0,12h
dhcp-host=lxc1,192.168.0.3,12h
dhcp-host=lxc2,192.168.0.4,12h
dhcp-host=lxc3,192.168.0.5,12h

The commented lines are there so you can switch this particular dnsmasq setup to
wifi-only access (wifi can't be bridged, unfortunately), where it's still great
for local DNS caching aside from being used as the DHCP server for containers.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Dotted container names now invalid?

2015-10-07 Thread Mark Constable

On 07/10/15 18:53, Stéphane Graber wrote:

Is there any chance this restriction could be loosened slightly to include
a dot char to re-enable a FQDN for container names?


Not all operating systems we may run on at some point support dots in
their hostnames, so allowing this would make things inconsistent across
platforms and I'd really rather avoid this.


Well, how about lxc itself transforming any dots to hyphens on just those systems?

Rather than require the majority of sane posix platforms to manually transform
a dotted FQDN to some hyphenated version why not just apply this "magically"
within lxc for the systems that can't cope with a dotted hostname?
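
i.e. nothing fancier than a transform like this, applied only on the platforms
that reject dots (illustrative only, not a real lxc option):

# hypothetical mapping of a FQDN container name to a dot-free hostname
safe_name=$(echo "mail.example.org" | tr '.' '-')   # -> mail-example-org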


Note that if what you need is a way to find containers easily, you can do:
 lxc config set container-name user.customer=blah
 lxc list user.customer=blah


Times a thousand hosts, compared to just a plain "lxc list"?

On 07/10/15 17:37, Fiedler Roman wrote:

Having the same problem, I was just replacing the dots with dashes, which
 does not really impair readability. So you do not need a lookup map.


I've got at least half a dozen clients with hyphens in their domain names,
so mapping - back to . would result in an invalid domain.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Dotted container names now invalid?

2015-10-05 Thread Mark Constable

lxc v0.19 on Ubuntu 15.10 host.

~ lxc launch wily abc
Creating abc done.
Starting abc done.

~ lxc launch wily abc.lxc
Creating abc.lxc error: Invalid container name

The 2nd one above used to work.

Why are dotted domain-like container names now invalid?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Configuring LXC containers to use a host bridge under Ubuntu Wily

2015-08-29 Thread Mark Constable

So as not to hijack Peter Steele's CentOS thread I'd like to ask a similar question
about the best way to tweak either the LXC network settings or
/etc/network/interfaces to provide the missing pieces for non-NAT bridging.

I modify lxc-net to bring up a bridge using my native internal LAN network range
and this starts up just fine when the lxd daemon is started by systemd, but I have
to manually use /etc/rc.local to bind the lxcbr0 bridge to enp4s0f1 (or eth0).

~ egrep -v "^(#|$)" /etc/default/lxc-net
USE_LXC_BRIDGE=true
LXC_BRIDGE=lxcbr0
LXC_ADDR=192.168.0.2
LXC_NETMASK=255.255.255.0
LXC_NETWORK=192.168.0.0/24
LXC_DHCP_RANGE=192.168.0.3,192.168.0.99
LXC_DHCP_MAX=96
LXC_DHCP_CONFILE=/etc/lxc/dnsmasq.conf
LXC_DOMAIN=lxc

~ egrep -v "^(#|$)" /etc/rc.local
ifconfig enp4s0f1 up
brctl addif lxcbr0 enp4s0f1
route add default gw 192.168.0.1
echo "
nameserver 192.168.0.2
search lxc" > /etc/resolv.conf

The above has been working for me for 6 months but it's tacky and I'd like to
complete these bridging steps properly. My attempts with 
/etc/network/interfaces
fail because the bridge is already up via either /etc/network/interfaces or 
lxc-net.

- is it possible to do these /etc/rc.local steps via lxc-net?

- if not could anyone offer a hint for the correct way to do the above via the
  /etc/network/interfaces file?


I hope to start using LXD for real with public IPs when wily is released and the
above rc.local workaround is not something I'd like to recommend in some blog 
post
tutorials I intend to write.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Configuring LXC containers to use a host bridge under CentOS 7

2015-08-29 Thread Mark Constable

On 29/08/15 23:54, Peter Steele wrote:

For example, I see references to the file /etc/network/interfaces. Is this an
 LXC thing or is this a standard file in Ubuntu networking?


It's a standard pre-systemd debian/ubuntu network config file.
 

Mark Constable asked a related question stemming from my original post and
 commented on the file /etc/default/lxc-net. I assume this file is *not* 
specific
 to Ubuntu.


Aside from some ubuntu specific apparmor etc files these are what the ubuntu lxc
package installs (confusingly the lxd-client package installs the lxc command)...

/etc/bash_completion.d/lxc
/etc/default/lxc
/etc/dnsmasq.d-available/lxc
/etc/init/lxc.conf
/etc/init/lxc-instance.conf
/etc/init/lxc-net.conf
/etc/lxc/default.conf
/lib/systemd/system/lxc-net.service
/lib/systemd/system/lxc.service

~ cat /etc/lxc/default.conf
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx

Eek, lxc-net does not seem to be part of a package so I'm not sure how I got 
that
file!

~ dpkg -S /etc/default/lxc-net
dpkg-query: no path found matching pattern /etc/default/lxc-net

However this config file refers to it so maybe I copied it from some 
howto/tutorial...

~ egrep -v "^(#|$)" /etc/default/lxc
LXC_AUTO="true"
USE_LXC_BRIDGE="false"  # overridden in lxc-net
[ -f /etc/default/lxc-net ] && . /etc/default/lxc-net
LXC_SHUTDOWN_TIMEOUT=120

FWIW I only use the lxc command for unpriv containers via the lxd daemon as of 
the
last 4 or 5 months and, like you I think, have no interest in the default NAT'd
10.0.3.* lxcbr0 network. My main test honeypot container on my laptop is at
https://goldcoast.org. It and ma...@goldcoast.org seem to work most of the time.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd 0.16 lxcbr0 not starting

2015-08-23 Thread Mark Constable

To continue from the previous post, I just noticed this...

Aug 23 16:50:04 mbox audit[2550]: AVC apparmor="DENIED" operation="mount" info="failed type match" error=-13 
profile="lxc-container-default" name="/sys/fs/cgroup/" pid=2550 comm="systemd" flags="ro, nosuid, nodev, noexec, 
remount, strictatime"

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd 0.16 lxcbr0 not starting

2015-08-23 Thread Mark Constable

On 24/08/15 08:19, Stéphane Graber wrote:

I'm not aware of any change there. Can you manually run:
 sudo lxd --group lxd --debug


Thanks for looking into this and apologies to everyone else for a large dump.

Just 2 containers, this is the above startup then a lxc list then a lxc start 
goldcoast.org

The 2nd part is what journalctl -f provided at the same time. ATM neither lxcfs 
or
dnsmasq are automatically started up. lxcbr0 is up because I manually brought 
it up.

~ sudo lxd --group lxd --debug
INFO[08-24|11:44:38] LXD is starting  path=/var/lib/lxd
INFO[08-24|11:44:38] Default uid/gid map:
INFO[08-24|11:44:38]  - u 0 10 65536
INFO[08-24|11:44:38]  - g 0 10 65536
INFO[08-24|11:44:38] Init 
driver=storage/btrfs
DBUG[08-24|11:44:38] Container load   
container=gc5.goldcoast.org
DBUG[08-24|11:44:38] Applying limits.memory: 500M
DBUG[08-24|11:44:38] Container load   
container=goldcoast.org
DBUG[08-24|11:44:38] Applying limits.memory: 500M
DBUG[08-24|11:44:38] Container load   
container=goldcoast.org
DBUG[08-24|11:44:38] Applying limits.memory: 500M
DBUG[08-24|11:44:38] Container load   
container=gc5.goldcoast.org
DBUG[08-24|11:44:38] Applying limits.memory: 500M
INFO[08-24|11:44:38] Looking for existing certificates:   
cert=/var/lib/lxd/server.crt key=/var/lib/lxd/server.key
INFO[08-24|11:44:38] LXD isn't socket activated
INFO[08-24|11:44:38] REST API daemon:
INFO[08-24|11:44:38]  - binding socket
socket=/var/lib/lxd/unix.socket
INFO[08-24|11:44:38]  - binding socket
socket=192.168.0.2:8443
INFO[08-24|11:45:06] handling method=GET 
url=/1.0/containers?recursion=1 ip=@
DBUG[08-24|11:45:06] Container load   
container=gc5.goldcoast.org
DBUG[08-24|11:45:06] Applying limits.memory: 500M
DBUG[08-24|11:45:06] Container load   
container=goldcoast.org
DBUG[08-24|11:45:06] Applying limits.memory: 500M
DBUG[08-24|11:45:06]
{
    "type": "sync",
    "status": "Success",
    "status_code": 200,
    "metadata": [
        {
            "state": {
                "architecture": 0,
                "config": {
                    "volatile.base_image": "7f95f1933d1a3da7a496a36540aefeb6e6a15a1b0c934da51152086528ee5f8c",
                    "volatile.eth0.hwaddr": "00:16:3e:c9:56:ce"
                },
                "devices": {},
                "ephemeral": false,
                "expanded_config": {
                    "volatile.base_image": "7f95f1933d1a3da7a496a36540aefeb6e6a15a1b0c934da51152086528ee5f8c",
                    "volatile.eth0.hwaddr": "00:16:3e:c9:56:ce"
                },
                "expanded_devices": {
                    "eth0": {
                        "hwaddr": "00:16:3e:c9:56:ce",
                        "nictype": "bridged",
                        "parent": "lxcbr0",
                        "type": "nic"
                    }
                },
                "log": "",
                "name": "gc5.goldcoast.org",
                "profiles": [
                    "default"
                ],
                "status": {
                    "status": "STOPPED",
                    "status_code": 1,
                    "init": 0,
                    "ips": null
                },
                "userdata": ""
            },
            "snaps": null
        },
        {
            "state": {
                "architecture": 0,
                "config": {
                    "volatile.base_image": "cf110dae6189b5983292d52cd13cd236aa5052bdeeccdd9c08e167bd3f6675e7",
                    "volatile.eth0.hwaddr": "00:16:3e:c7:95:53",
                    "volatile.last_state.idmap": 

[lxc-users] lxd 0.16 lxcbr0 not starting

2015-08-22 Thread Mark Constable

I just did an update on a 15.10 host and a new version of lxd and lxd-client
showed up but on reboot my lxcbr0 bridge does not get created, as it had been
up till now. The only journald error I can see is...

Aug 23 15:39:29 mbox lxd[2980]: error: cannot listen on https socket: listen 
tcp 192.168.0.2:8443: bind: cannot assign requested address

~ netstat -tanup | grep :8443
[nada]

~ egrep -v "^(#|$)" /etc/lxc/default.conf
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx

~ egrep -v "^(#|$)" /etc/default/lxc-net
USE_LXC_BRIDGE=true
LXC_BRIDGE=lxcbr0
LXC_ADDR=192.168.0.2
LXC_NETMASK=255.255.255.0
LXC_NETWORK=192.168.0.0/24
LXC_DHCP_RANGE=192.168.0.3,192.168.0.99
LXC_DHCP_MAX=96
LXC_DHCP_CONFILE=/etc/lxc/dnsmasq.conf
LXC_DOMAIN=goldcoast.org

I've tried commenting out the LXC_DHCP_CONFILE field in case something was
preventing dnsmasq from starting up but that didn't help.

Any hints as to what may have changed with 0.16 initialization?


Also, the linuxcontainers.org News page says this for 0.16...

Added container auto-start support, includes start delay and start ordering

but I can't find any info about how to take advantage of unprivileged auto
startup. How do I configure the above (once I get lxd working again)?
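
Guessing from the wording of that news item, I'd expect per-container keys along
these lines, but I haven't verified the names yet (sketch only):

lxc config set goldcoast.org boot.autostart true
lxc config set goldcoast.org boot.autostart.delay 5
lxc config set goldcoast.org boot.autostart.priority 10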
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxd error saving config file for the container failed

2015-08-16 Thread Mark Constable
Just updated kubuntu 15.10 and rebooted and now getting this error...

~ lxc start goldcoast.org --debug
DBUG[08-16|20:46:19] fingering the daemon 
DBUG[08-16|20:46:19] raw response: 
{"type":"sync","status":"Success","status_code":200,"metadata":{"api_compat":1,"auth":"trusted","config":{"core.https_address":"192.168.0.2:8443","core.trust_password":true},"environment":{"architectures":[2,1],"backing_fs":"btrfs","driver":"lxc","driver_version":"1.1.3","kernel":"Linux","kernel_architecture":"x86_64","kernel_version":"4.1.0-3-lowlatency","version":"0.15"}}}
 
DBUG[08-16|20:46:19] pong received 
DBUG[08-16|20:46:19] putting {"action":"start","force":false,"timeout":-1}
 to http://unix.socket/1.0/containers/goldcoast.org/state 
DBUG[08-16|20:46:19] raw response: 
{"type":"async","status":"OK","status_code":100,"operation":"/1.0/operations/63d04259-06f8-49f2-b0d7-ff6aa2ff731e","resources":null,"metadata":null}
 
DBUG[08-16|20:46:19] 1.0/operations/63d04259-06f8-49f2-b0d7-ff6aa2ff731e/wait 
DBUG[08-16|20:46:19] raw response: 
{"type":"sync","status":"Success","status_code":200,"metadata":{"created_at":"2015-08-16T20:46:19.009675961+10:00","updated_at":"2015-08-16T20:46:19.010560679+10:00","status":"Failure","status_code":400,"resources":null,"metadata":"saving config file for the container failed","may_cancel":false}}
 
error saving config file for the container failed

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd error saving config file for the container failed

2015-08-16 Thread Mark Constable
On Sunday, August 16, 2015 08:50:54 PM Mark Constable wrote:
 Just updated kubuntu 15.10 and rebooted and now getting this error...
 error saving config file for the container failed

I just did an strace lxc start goldcoast.org and it seems to be getting stuck 
using the API so I suspect my config is now out of whack with a recent wily
package update.

{"core.https_address":"192.168.0.2:8443","core.trust_password":true},

What exactly should be the value of core.https_address with LXD 0.15 ?

And where do I find a list of all config variables?

https_address does not exist on this page...

https://github.com/lxc/lxd/blob/master/specs/rest-api.md

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] [SOLVED] lxd not providing port 8443

2015-08-12 Thread Mark Constable
 And indeed it does so lxd config set core.https_address 192.168.0.2 got
 me a port :8443 on my local host as soon as I hit enter, as advertised.

However after a reboot I got an error about no port for the config option
and lxd refused to start which made it awkward to update the config option
so I manually modified /var/lib/lxd/lxd.db guessing I needed to add :8443
and that worked (for 0.15)...

sudo sqlite3 /var/lib/lxd/lxd.db \
 "update config set value='192.168.0.2:8443' where key='core.https_address'"

So my previous example needs to be modified as below otherwise you can end
up with a non-functional lxd deamon that refuses to start...

lxd config set core.https_address 192.168.0.2:8443
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] [SOLVED] lxd not providing port 8443

2015-08-12 Thread Mark Constable
On Tuesday, August 11, 2015 10:25:47 PM Kevin LaTona wrote:
  However, neither my local or remote test machines have anything running on
  port 8443. Is there some trick to start lxd plus access via port 8443?

Yes, there is :-)

 There was an issue like this back around 0.8 but it was fixed.

Hey Kevin, thanks for your help. I eventually sorted it out and it was simply
my lack of attention to recent developments of LXD 0.15 released a week ago.

According to - https://linuxcontainers.org/lxd/news/

The --tcp daemon option has been replaced by the core.https_address option
allowing users to change the address and port LXD binds to. Changes are now
applied immediately.

And indeed it does so lxd config set core.https_address 192.168.0.2 got me
a port :8443 on my local host as soon as I hit enter, as advertised.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxd not providing port 8443

2015-08-11 Thread Mark Constable
I have 2 *buntu 15.10 hosts and my local one has a few trusty, utopic and
wily containers. I've just updated a local LAN remote NAS to wily (so both
ends run the same version of lxd/lxc) and want to test copying and migration.

However, neither my local or remote test machines have anything running on
port 8443. Is there some trick to start lxd plus access via port 8443?


~ p lxd
root  2098  0.0  0.1 380028 19756 ?Ssl  Aug11   0:02 /usr/bin/lxd 
--group lxd --logfile=/var/log/lxd/lxd.log
root  2123  0.0  0.0 213924  6352 ?Ss   Aug11   0:00 [lxc monitor] 
/var/lib/lxd/containers gc1
root  5803  0.0  0.0 213924  4308 ?Ss   Aug11   0:00 [lxc monitor] 
/var/lib/lxd/containers gc5

~ lxc list
+---+-+-+--+---+---+
|   NAME|  STATE  |IPV4 | IPV6 | EPHEMERAL | SNAPSHOTS |
+---+-+-+--+---+---+
| gc4   | STOPPED | |  | NO| 0 |
| gc5   | RUNNING | 192.168.0.5 |  | NO| 0 |
| gc6   | STOPPED | |  | NO| 0 |
| gc1   | RUNNING | 192.168.0.3 |  | NO| 0 |
+---+-+-+--+---+---+

~ sudo netstat -tanup | grep 8443
[... nothing ...]

From the remote back to my local host...

~ lxc remote add mbox https://mbox:8443 --debug
DBUG[08-12|14:08:45] Error reading the server certificate for mbox:
  open /home/markc/.config/lxc/servercerts/mbox.crt: no such file or directory
 
DBUG[08-12|14:08:45] fingering the daemon 
error Get https://mbox:8443/1.0: Unable to connect to: mbox:8443


There is no firewall between them (from remote LAN NAS back to my laptop).

~ nmap mbox

Starting Nmap 6.47 ( http://nmap.org ) at 2015-08-12 14:11 AEST
Nmap scan report for mbox (192.168.0.2)
Host is up (0.00032s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
53/tcp   open  domain
111/tcp  open  rpcbind
2049/tcp open  nfs

Nmap done: 1 IP address (1 host up) scanned in 0.07 seconds

And fwiw this is the wily container running on my local host scanned
from the remote LAN NAS (ie; no fundamental networking issues)...

~ nmap gc1

Starting Nmap 6.47 ( http://nmap.org ) at 2015-08-12 14:12 AEST
Nmap scan report for gc1 (192.168.0.3)
Host is up (0.00032s latency).
Not shown: 994 closed ports
PORTSTATE SERVICE
22/tcp  open  ssh
25/tcp  open  smtp
80/tcp  open  http
443/tcp open  https
465/tcp open  smtps
993/tcp open  imaps

Nmap done: 1 IP address (1 host up) scanned in 0.08 seconds

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Correct way to rsync /var/lib/lxd

2015-08-08 Thread Mark Constable
I'm just trying to do a simple rsync backup of /var/lib/lxd/ to a local NAS and
get these errors as root on the host. I understand the issue of subgids and
subuids not being valid on the target machine (the below errors are coming from
the destination) outside the normal range of uid/gids, but I'm still not sure of
the best flags, if any, to provide for rsync to get around this? Or is there a
better way to back up containers?


rsync: delete_file: rmdir(lxc/container1/rootfs) failed: Permission denied (13)
rsync: delete_file: unlink(lxc/container1/metadata.yaml) failed: Permission denied (13)
cannot delete non-empty directory: lxc/container1
rsync: delete_file: rmdir(lxc/container1) failed: Permission denied (13)
rsync: delete_file: unlink(lxc/lxc-monitord.log) failed: Permission denied (13)
cannot delete non-empty directory: lxc
rsync: failed to set times on "/mbox/var/lib/lxd/unix.socket" (in backups): Operation not permitted (1)
rsync: recv_generator: mkdir "/mbox/var/lib/lxd/containers" (in backups) failed: Permission denied (13)

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Correct way to rsync /var/lib/lxd

2015-08-08 Thread Mark Constable
On Sat, 8 Aug 2015 06:32:26 PM Serge Hallyn wrote:
  I'm just trying to do a simple rsync backup of /var/lib/lxd/ to a local NAS
 
 Not valid - is that because the NAS only supports a single uid, i.e. it's vfat
 or somesuch?  Or does it support a smaller range?

No it's a ubuntu 15.04 server based HP microserver running btrfs RAID10.
 
  rsync: recv_generator: mkdir /mbox/var/lib/lxd/containers (in backups)
  failed: Permission denied (13)

What I neglected to mention is that I am using rsync:: direct without going via
SSH and there is my (doh, brainfart) problem. The remote rsyncd daemon is 
running
as a single non-root user and I obviously need to rsync as root... via SSH!

Yes, doing that as I type, problem solved.
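
For the archives, the sort of invocation I'm settling on looks like this (a
sketch; adjust the destination for your own NAS):

rsync -aHAX --numeric-ids --delete /var/lib/lxd/ root@nas:/backups/mbox/var/lib/lxd/

The --numeric-ids flag is the important one so the shifted uids/gids of the
unprivileged containers get copied as-is rather than remapped by name on the
destination.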

 You could just tar up each container directory, or the whole thing, and back
 that up.

Gladly I won't have to resort to that compared to an incremental rsync via cron.

A few questions though...

Should I be able to use btrfs send/receive?

Would that be more efficient than rsync?

Any other native methods for backing up containers?
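
To make the first question concrete, the flow I have in mind is roughly this
(a sketch that assumes /var/lib/lxd is itself a btrfs subvolume and the NAS end
is btrfs too):

btrfs subvolume snapshot -r /var/lib/lxd /var/lib/lxd-backup
btrfs send /var/lib/lxd-backup | ssh root@nas btrfs receive /backups/mbox/

with later runs using send -p against the previous snapshot so only the deltas
go over the wire.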

Thanks for your help and sorry about the noise.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD container how to start network?

2015-07-23 Thread Mark Constable
On Thursday, July 23, 2015 01:32:33 PM Fajar A. Nugraha wrote:
  How do you folks get a working network with current wily containers?
 
 IIRC networking on unprivileged systemd container was broken due to
 some changes in systemd. This is true even on current release (vivid).

Yep, something fundamental with systemd is not working...

~ lxc stop wily # just hangs
~ lxc exec wily -- bash
root@wily:~# halt
Failed to talk to init daemon.

 If this hasn't been fixed, then you can't use unprivileged wily as-is,
 regardless whether dbus is installed or not. There are workarounds for
 this (including creating custom systemd units).

I could have sworn a wily container I tested a month ago worked ootb,
both networking and the ability to shutdown without hanging.

 You should always be able to configure network manually though (e.g.
 ifconfig). Another workaround is to use lxc-attach -s, and omit
 network namespace, so that the container would use host's networking,
 thus you will be able to install packages using apt-get.

That's a bizarre workaround. It looks like the best workaround for now
is to stick to utopic containers and try again in another month.

Thanks Fajar.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD container how to start network?

2015-07-23 Thread Mark Constable
On Thursday, July 23, 2015 02:51:56 PM Fajar A. Nugraha wrote:
  That's a bizarre workaround. It looks like the best workaround for
  now is to stick to utopic containers and try again in another month.
 
 I believe I've said this before, but you'd better not use utopic as it's
 EOLed today. To use upstart instead of systemd, either stick with trusty,

That's also painful because then I have to start adding various PPA's to be
able to use later versions of nginx, php, mariadb et al. One of the reasons
for me wanting to use wily is to get myself skilled up and ready to deploy
systemd server hosts in general, aside from systemd based containers, so
going backwards is not my ideal solution.

 or use vivid with upstart-sysv.

And that one could be workable but I'm stuck with the same catch-22 in that
I need the network to be working before being able to update and install any
new packages!

Sure, I can manually do this for a single install but I've been developing a
shell script to automate multiple installs from just after lxc launch through
to a fully configure WordPress ready for client usage. I was hoping that
unprivileged LXD containers with systemd would have stabilized by now but
obviously not.

Anyway, thanks again for your suggestions.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXD container how to start network?

2015-07-22 Thread Mark Constable
*buntu wily host and unprivileged lxd containers. This used to work but as
you can see I seem to need dbus... on a headless server!

lxc exec w1 -- bash -c 'systemctl restart networking.service'
Failed to get D-Bus connection: No such file or directory

Using a wily image from today, how do I start networking inside a container?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] How to increase space of existing container

2015-07-21 Thread Mark Constable
On Monday, July 20, 2015 08:44:12 AM Mahesh Pujari wrote:
 I had created a container with default settings and now it fails to start
 due to space issues, how can I increase space of a container which has
 exhausted its space.

Most likely you have to deal with this on a lower filesystem or hardware level
to increase the space of the current partition or subvolume where the lxc root
filesystem is stored. However if you just need to get unstuck and get the container
to start again so you can take control of it then you can clear out /var/log
and /var/tmp fairly safely and that might get you enough wiggle room to at least
start and get back into the container. Something like this could help, as root...

find /var/lib/lxc/c1/rootdev/var/log/ -type f -exec rm {} \;
rm -rf /var/lib/lxc/c1/rootdev/var/tmp/*
rm -rf /var/lib/lxc/c1/rootdev/tmp/*
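
If you want to see where the space actually went before deleting anything, a
quick look from the host helps (same example path as above):

du -shx /var/lib/lxc/c1/rootdev/* | sort -h | tail -20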


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] multiple containers on the same vlan

2015-06-15 Thread Mark Constable
On Mon, 15 Jun 2015 10:33:14 AM Genco Yilmaz wrote:
  Is there any way to link multiple containers without using softbridge
  +veth pair by using network.type vlan? or what is the best practice
  in this type of topology?

If you just want to expose containers to the host's private or public network
then you don't need to use a vlan. Just tweak the lxcbr0 host bridge to sit
on the same network as the host and your container network settings can work
without alteration; all containers are then accessible from the host's network
and also from each other. For instance I use this on a ubuntu systemd host (because
I'm not smart enough to know how to do this part properly) in /etc/rc.local...

sleep 1 && {
  ifconfig eth0 up
  sleep 1
  brctl addif lxcbr0 eth0
  sleep 1
  route add default gw 192.168.0.1
  echo "
nameserver 192.168.0.2
search mydomain.com" > /etc/resolv.conf
}

~ cat /etc/default/lxc-net
USE_LXC_BRIDGE=true
LXC_BRIDGE=lxcbr0
LXC_ADDR=192.168.0.2
LXC_NETMASK=255.255.255.0
LXC_NETWORK=192.168.0.0/24
LXC_DHCP_RANGE=192.168.0.2,192.168.0.54
LXC_DHCP_MAX=53
LXC_DHCP_CONFILE=/etc/lxc/dnsmasq.conf
LXC_DOMAIN=mydomain.com

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXD v0.11 and vivid containers

2015-06-09 Thread Mark Constable
I was excited to see 0.11 arrive on a wily host and thought it might
solve a few of the issues with systemd-journal not starting up and
therefore the network not coming up either, but it doesn't seem so. In
fact after a launch of today's vivid image, which starts, there is no IP
(as before) and I can't exec bash into it or stop it without killing the
lxc exec PID. Nothing interesting in /var/lib/lxd/lxc/lxc-monitord.log

Update: well well, on a hunch I checked to see if there is a wily
image and there is so I launch one and it lists with an IP (cool)
but I still can't exec into or stop it.

liblxc11.1.2-0ubuntu3
lxc1.1.2-0ubuntu3
lxcfs  0.9-0ubuntu1
lxd0.11-0ubuntu1
lxd-client 0.11-0ubuntu1

Not whinging, just FWIW.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd: -B backingstore equivalent?

2015-06-05 Thread Mark Constable
On Fri, 5 Jun 2015 05:05:21 PM Serge Hallyn wrote:
Does this mean that btrfs is considered a second class option
   
   It is, for a few reasons.
  
  Sorry to persist with this but would you mind elaborating briefly on
  some of those reasons or point me to further discussion please?
 
 We didn't want to depend on a single fs.  Also, btrfs still has some
 performance issues (esp at fsync, which kills apt-get),

I suspect a lot of performance issues revolve around unbalanced systems.

 and people still seem to hit corruption with it (though other people
 seem to run it rock-solid with no issues).

Older war stories mostly revolve around folks letting their btrfs systems
get to 100% full and/or involve earlier series 3 kernels and those earlier
bad experiences are still being used as a reason why btrfs is not ready.

  I have invested heavily in btrfs so I am a little shocked at this
  news. If I want to stick to btrfs then would I be better off relying
  on legacy lxc?
 
 I don't think we'll be dropping the support we have.

Sure, I wouldn't expect that, but it means that most future devel, testing,
tutorials and example setups will be based on LVM instead of btrfs and
that concerns me (not that my concerns matter in the real world.)

 We definately won't be adding support for zfs, overlayfs, etc.

Good.

 Can you say a bit more about how your usage depends on btrfs?

I can't compare btrfs to LVM because I've been using btrfs for so long
now that I have forgotten all I knew about LVM... and very glad of that
because btrfs is so much simpler and more flexible.

I have a couple of dozen personal and professional systems and all run
utopic and btrfs. The busiest server with 1000s of clients and 100's
of vhost domains has been up for 6 months without any problems other
than initial performance issues because the fs needed to be rebalanced.
Once that was done, and once a month, it's been perfectly satisfactory.
I also got caught out with sparse sqlite3 databases from Dspam but once
they were regularly vacuumed that problem disappeared. I didn't notice
that particular problem on the previous ext4/dell-raid system.
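
That once-a-month maintenance, by the way, is nothing more exotic than a
filtered balance, something like this (the thresholds are just whatever suits
the box):

btrfs balance start -dusage=75 -musage=75 /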

Personally, my own pair of HP microservers for local backup were
renovated from zfs to btrfs 3 months ago and have been working perfectly.
Again, particularly so since being rebalanced. The ease of management
and flexibility, especially being able to use send/receive to sync
them, is just not (so easily) available without btrfs.

The key points over LVM are being able to use disks of any size, online
transition of raid personalities, file system (not hardware) level
checksumming and... subvolumes.

I guess my usage depends on btrfs because of its ease of use and
flexibility to cover everything from a single laptop SSD through to
various RAID configurations, stopping short of enterprise level openstack-like
systems. There the extra stability and performance of LVM is justified
in 2015 (maybe 2016), but short of that fairly lofty niche enterprise
level of need, this year, I believe btrfs is an overall superior fs
solution and a perfect fit for lxc/lxd.

Obviously IMHO.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXD logo usage rights

2015-06-04 Thread Mark Constable
What is the status of redistribution rights for the LXD logo?

I want to start a series of blog posts about my experiences with LXD
so the current logo would be nice to re-use but I don't want Canonical
lawyers, or anyone, to come after me with nasty takedown notices.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc snapshot

2015-06-04 Thread Mark Constable
Is there a tutorial or blog post about using lxc snapshot anywhere?

~ lxc version
0.10

~ lxc list
++-+---+--+---+---+
|NAME|  STATE  | IPV4  | IPV6 | EPHEMERAL | SNAPSHOTS |
++-+---+--+---+---+
| sysadm | RUNNING | 192.168.0.3   |  | NO| 0 |
| ubuntu.lxc | RUNNING | 192.168.0.216 |  | NO| 1 |
++-+---+--+---+---+

~ lxc snapshot ubuntu.lxc ubuntu_101.lxc

The snapshot does not show up in the above lxc list and I'm not sure how I
would go about starting it... presuming one can treat a snapshot just like
the parent container (maybe I misunderstand the purpose of a snapshot.)

~ lxc info ubuntu.lxc
Name: ubuntu.lxc
Status: RUNNING
Init: 22167
Ips:
  eth0:  IPV4   192.168.0.216
  lo:IPV4   127.0.0.1
  lo:IPV6   ::1
Snapshots:
  ubuntu.lxc/ubuntu_101.lxc

Questions:

- can I use a snapshot like the parent container?
- if so how to start and stop it?
- if not what is the best method to create 100's of containers?
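
For context, this is the kind of workflow I was hoping snapshots would allow
(a guess at the commands, unverified on 0.10):

lxc restore ubuntu.lxc ubuntu_101.lxc          # roll the container back to the snapshot
lxc copy ubuntu.lxc/ubuntu_101.lxc ubuntu2.lxc # create a new container from it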

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC multi distribution installer boot2lxc VM based on Alpine Linux

2015-06-03 Thread Mark Constable
On Thu, 4 Jun 2015 01:33:03 AM Tobby Banerjee wrote:
 https://www.flockport.com/start

What's with this login first crap?

https://www.flockport.com/download/flockport-install.tar.xz

I don't appreciate being spammed on a public mailing list like this no
matter how good you think your intentions are.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD can ping from container out, but not in from outside network

2015-05-16 Thread Mark Constable
On Sat, 16 May 2015 08:03:26 PM Kevin LaTona wrote:
 With a LXD based LXC container what iptables magic does one need to
 be able to access these 10.0.3.x containers from outside that local
 network?
 
 So far I got it so I log into a 10.0.3.x based container and ping the
 outside world.

The last couple of emails I sent were all about addressing this problem.

The default 10.0.3.x based container networking uses NAT, the same way
your 192.168.x.x network is NAT'd to the outside world via your router. The
easiest solution I am aware of is to change the default lxcbr0 to use
the same 192.168.x.x network segment as your host and then any other
host on your 192.168.x.x network can see any of the containers. Then
you can also make a container visible to the outside world using normal
port forwarding on your main router.
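
If you'd rather keep the default NAT'd 10.0.3.x bridge, the usual iptables
magic is a DNAT rule on the host, along these lines (a sketch; adjust the
interface, port and container IP for your setup):

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.0.3.100:80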

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD assigning static IP's at start tp containers

2015-05-16 Thread Mark Constable
On Fri, 15 May 2015 10:54:08 PM Kevin LaTona wrote:
 I was reading about ways in legacy LXC of being able to have the DHCP server
 assign static IP's to containers at startup based upon container name.
 If one is using Ubuntu 15.04, systemd and LXD is that still possible?

Hey Kevin, I just set something like this up and although this may not
be the right way to do it, it works for my situation which sounds somewhat
similar to what you are after. It's probably best I just show my relevant config
files; some of this may help you or provide some ideas...

My main gateway/wireless/dhcp router is 192.168.0.1

~ grep -Ev "^(#|$)" /etc/default/lxc-net
USE_LXC_BRIDGE=true
LXC_BRIDGE=lxcbr0
LXC_ADDR=192.168.0.2
LXC_NETMASK=255.255.255.0
LXC_NETWORK=192.168.0.0/24
LXC_DHCP_RANGE=192.168.0.2,192.168.0.54
LXC_DHCP_MAX=53
LXC_DHCP_CONFILE=/etc/lxc/dnsmasq.conf
LXC_DOMAIN=example.org

And the magic to fill in the gaps above...

~ cat /etc/rc.local
sleep 5 && {
 brctl addif lxcbr0 eth0
 sleep 1
 route add default gw 192.168.0.1
 echo "nameserver 8.8.8.8" > /etc/resolv.conf
}
exit 0

~ cat /etc/lxc/dnsmasq.conf
dhcp-host=sysadm,192.168.0.3
dhcp-host=markc,192.168.0.4


I also remove ifupdown and resolvconf and set all my NetworkManager
interfaces to autoconnect=false so if I need to switch to wifi when
moving my laptop away from an eth cable I can ifconfig down lxcbr0
and select a wifi connection.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc.aa_allow_incomplete in vivid containers

2015-05-07 Thread Mark Constable
I thought I'd try going back to normal privileged containers which will at
least (or did pre-systemd) autostart. The only change from defaults is my
own br0 to put the containers on my local network...

~ grep br0 /etc/lxc/*
/etc/lxc/default.conf:lxc.network.link = br0

And on 15.04 I've done a simple...

~ add-apt-repository ppa:ubuntu-lxc/daily
~ lxc-create -t ubuntu -n test
~ lxc-start -F -n test

lxc-start: lsm/apparmor.c: apparmor_process_label_set: 169 If you really 
want to start this container, set
lxc-start: lsm/apparmor.c: apparmor_process_label_set: 170 
lxc.aa_allow_incomplete = 1
lxc-start: lsm/apparmor.c: apparmor_process_label_set: 171 in your container 
configuration file
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 4
lxc-start: start.c: __lxc_start: 1178 failed to spawn 'test'
lxc-start: cgmanager.c: cgm_remove_cgroup: 523 call to cgmanager_remove_sync 
failed: invalid request
lxc-start: cgmanager.c: cgm_remove_cgroup: 525 Error removing all:lxc/test-2
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by 
setting the --logfile and --logpriority options.


Do I really have to add lxc.aa_allow_incomplete = 1 to 
/var/lib/lxc/test/config?

When I've fiddled with this in the past I seem to get all sorts of other
problems within the container, like can't shut down etc?

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc memory limit doesn't work

2015-04-29 Thread Mark Constable

On 29/04/15 17:17, Fırat KÜÇÜK wrote:

lxc.cgroup.memory.limit_in_bytes = 2048M
but my container free -h output shows 32GB
Is there anything that i missed?


For me on ubuntu I had to add this to my default grub line and reboot...

~ grep cgroup /etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash cgroup_enable=memory swapaccount=1"
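
followed by the usual regenerate step so the new cmdline actually takes effect
on the next boot:

sudo update-grub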

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Btrfs gquota and container disk usage limits

2015-04-05 Thread Mark Constable

On 05/04/15 02:22, david.an...@bli.uzh.ch wrote:

I see that nobody replied to your question.
Have you made any progress in the meantime?


Not really. I've actually dropped back to using pre-systemd trusty in
privileged containers until ubuntu 15.04 is released hoping that might
provide better support for systemd/lxd/unprivileged vivid containers.


Also, how did you manage to have your containers v1 and v2 installed
 into a btrfs subvolume? With lxc-create it's clear, but with lxc launch?


Looks like lxd automatically creates part of the filesystem as a subvolume
but I'm not sure how rootfs/var/lib/machines fits into the picture, so no,
I never figured out how to properly set up a lxd container as a subvolume
let alone how to provide real world usable qgroup quotas. Like I say, I am
hoping vivid comes with usable lxd containers and decent documentation.

~ btrfs sub list / | grep lxd
ID 2403 gen 554661 top level 1674 path 
var/lib/lxd/lxc/v1/rootfs/var/lib/machines
ID 2407 gen 556386 top level 1674 path 
var/lib/lxd/lxc/v2/rootfs/var/lib/machines

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] color in console not working in unprivileged containers

2015-03-21 Thread Mark Constable

On 22/03/15 02:12, Michael H. Warfield wrote:

Append linux to the end of those lines in each file so it reads like
this (for tty1.conf)

exec /sbin/getty -8 38400 tty1 linux


Wonderful thorough explanation, thanks Mike.

FWIW it's also possible to do something like this too (not documented in
the man pages from lxd-daily/vivid (1.1.0-0ubuntu1))...

lxc-attach -n mycontainer -v TERM=linux

and --clear-env seems to remove the host env vars; without a .bashrc or
.profile there is not even a TERM var, and sure enough there are no colors
in an ls listing (oic, set shows TERM=dumb)...

~ lxc-attach -n v1 --clear-env
root@v1:/root# env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/root
SHLVL=1
container=lxc
_=/usr/bin/env

And also FWIW these are my 2 favorite aliases of late...

alias a='lxc-attach --clear-env -v USER=root -v HOME=/root -v TERM=linux -v 
LANG=en_US.UTF-8 -v LC_ALL=en_US.UTF-8 -n'

alias lx='lxc-ls -f'

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Containers not shutting down

2015-03-20 Thread Mark Constable

On 21/03/15 01:27, Serge Hallyn wrote:

Since the past 2 days of updates I notice my lxc and lxd containers just hang
when trying to shut them down, consequently rebooting my laptop requires a
hard power button reset (so far btrfs on a SSD has survived). Also my main
system privileged container does not provide any output for free and top and
even an apt-get upgrade just hangs (hence I can't get systemd 219-4ubuntu7).


Do any of the tasks in the container stick around after shutdown?


I'm not sure how to determine that because the container is pretty well unusable
at this point.

root@v1 ~ shutdown -h now
Failed to start poweroff.target: Connection timed out

Broadcast message from root@ns on pts/4 (Sat 2015-03-21 10:55:28 AEST):

The system is going down for power-off NOW!

root@v1 ~ ps aux
[hangs]

From the host I get this but I'm not sure how to analyze any of the info here...

root@mbox ~ ll /sys/fs/cgroup/devices/lxc/v1-1/system.slice/
total 0
drwxr-xr-x 2 root root 0 Mar 20 22:47 -.mount/
drwxr-xr-x 2 root root 0 Mar 20 22:47 console-getty.service/
drwxr-xr-x 2 root root 0 Mar 20 22:49 courier-authdaemon.service/
drwxr-xr-x 2 root root 0 Mar 20 22:49 courier-imap-ssl.service/
drwxr-xr-x 2 root root 0 Mar 20 22:49 courier-imap.service/
drwxr-xr-x 2 root root 0 Mar 20 22:49 courier-mta-ssl.service/
drwxr-xr-x 2 root root 0 Mar 20 22:49 courier-mta.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 cron.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 dev-hugepages.mount/
drwxr-xr-x 2 root root 0 Mar 20 22:47 dev-lxc-console.mount/
drwxr-xr-x 2 root root 0 Mar 20 22:47 dev-lxc-tty1.mount/
drwxr-xr-x 2 root root 0 Mar 20 22:47 dev-lxc-tty2.mount/
drwxr-xr-x 2 root root 0 Mar 20 22:47 dev-lxc-tty3.mount/
drwxr-xr-x 2 root root 0 Mar 20 22:47 dev-lxc-tty4.mount/
drwxr-xr-x 2 root root 0 Mar 20 22:47 dev-mqueue.mount/
drwxr-xr-x 2 root root 0 Mar 20 22:47 getty-static.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 ifup-wait-all-auto.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 lvm2.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 memcached.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 mysql.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 networking.service/
drwxr-xr-x 2 root root 0 Mar 20 22:51 nginx.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 nscd.service/
drwxr-xr-x 2 root root 0 Mar 20 22:49 ondemand.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 php5-fpm.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 proc-cpuinfo.mount/
drwxr-xr-x 2 root root 0 Mar 20 22:47 proc-meminfo.mount/
drwxr-xr-x 2 root root 0 Mar 20 22:47 proc-stat.mount/
drwxr-xr-x 2 root root 0 Mar 20 22:47 proc-sysrq\x2dtrigger.mount/
drwxr-xr-x 2 root root 0 Mar 20 22:47 proc-uptime.mount/
drwxr-xr-x 2 root root 0 Mar 20 22:47 rc-local.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 resolvconf.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 rsyslog.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 ssh.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 sys-devices-virtual-net.mount/
drwxr-xr-x 2 root root 0 Mar 20 22:47 sys-fs-fuse-connections.mount/
drwxr-xr-x 2 root root 0 Mar 20 22:47 sys-kernel-debug.mount/
drwxr-xr-x 2 root root 0 Mar 20 22:47 sysstat.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 system-container\x2dgetty.slice/
drwxr-xr-x 2 root root 0 Mar 20 22:47 system-getty.slice/
drwxr-xr-x 2 root root 0 Mar 20 22:47 systemd-journal-flush.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 systemd-journald.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 systemd-random-seed.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 systemd-remount-fs.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 systemd-tmpfiles-setup.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 systemd-update-utmp.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 systemd-user-sessions.service/
drwxr-xr-x 2 root root 0 Mar 20 22:47 udev-finish.service/
-rw-r--r-- 1 root root 0 Mar 20 22:47 cgroup.clone_children
-rw-r--r-- 1 root root 0 Mar 20 22:47 cgroup.procs
--w------- 1 root root 0 Mar 20 22:47 devices.allow
--w------- 1 root root 0 Mar 20 22:47 devices.deny
-r--r--r-- 1 root root 0 Mar 20 22:47 devices.list
-rw-r--r-- 1 root root 0 Mar 20 22:47 notify_on_release
-rw-r--r-- 1 root root 0 Mar 20 22:47 tasks

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Btrfs gquota and container disk usage limits

2015-03-19 Thread Mark Constable

I've tried to see if btrfs gquota applied to the btrfs subvolumes
automatically created with both lxc-create and lxc create would
be reflected within a container via df but no such luck yet. I'm
just wondering if I am doing this right and/or any hints I may be
missing?

Or is this another thing that is not ready yet?

Ubuntu 15.04 host and containers, 1 system, 2 unprivileged lxd...

~ btrfs sub list /
ID 1674 gen 560623 top level 5 path @
ID 1675 gen 560623 top level 5 path @home
ID 2388 gen 551854 top level 1674 path 
var/lib/lxc/sysadm.lan/rootfs/var/lib/machines
ID 2389 gen 551882 top level 1674 path var/lib/machines
ID 2403 gen 554661 top level 1674 path 
var/lib/lxd/lxc/v1/rootfs/var/lib/machines
ID 2407 gen 556386 top level 1674 path 
var/lib/lxd/lxc/v2/rootfs/var/lib/machines

btrfs quota enable /
btrfs quota rescan /
btrfs qgroup limit 5G /var/lib/lxc/sysadm.lan/rootfs/var/lib/machines
btrfs qgroup limit 1G /var/lib/lxd/lxc/v1/rootfs/var/lib/machines
btrfs qgroup limit 5G /var/lib/lxd/lxc/v2/rootfs/var/lib/machines
btrfs quota rescan /

~ btrfs qgroup show / -re
qgroupid     rfer         excl         max_rfer     max_excl
--------     ----         ----         --------     --------
0/5          16384        16384        0            0
0/930        14095704064  14095704064  0            0
0/931        60293865472  60293865472  0            0
0/1674       25438732288  25438732288  0            0
0/1675       42481688576  42481688576  0            0
0/2388       16384        16384        5368709120   0
0/2403       16384        16384        1073741824   0
0/2407       16384        16384        5368709120   0
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] [LXD] lxc file push/pull

2015-03-18 Thread Mark Constable

https://linuxcontainers.org/lxd/getting-started-cli/ says...

To pull a file from the container, use:
lxc file pull /etc/hosts .

To push one, use:
lxc file push hosts /tmp

but how can lxc file push|pull know what container to operate on?

This does not seem to clarify the issue...

~ lxc help file
Manage files on a container.

lxc file push [--uid=UID] [--gid=GID] [--mode=MODE] <source> [<source>...] <target>
lxc file pull <source> [<source>...] <target>

I would kind of expect an explicit 4th container argument.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] [LXD] how to add cgroup options using lxc

2015-03-18 Thread Mark Constable

On 18/03/15 16:20, Guido Jäkel wrote:

lxc.cgroup.cpuset.cpus = 7


is this proven to work as intended?
[...]
 your example would mean to select the single core #7.


That is intended for testing. I find that the last cpu seems to be
the least used (from simple top tests) so I figure it's a reasonable
choice on my i7 laptop. This is mainly what I want to see...

root@v1:~# grep processor /proc/cpuinfo
processor   : 0

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] [LXD] how to add cgroup options using lxc

2015-03-17 Thread Mark Constable

Ubuntu vivid w/ lxd-daily.

When setting up unprivileged containers via lxc-create these cgroup
settings come from ~/.config/lxc/default.conf and for the most part
seem to work.
 
lxc.cgroup.memory.soft_limit_in_bytes = 256M

lxc.cgroup.memory.limit_in_bytes = 512M
lxc.cgroup.memory.memsw.limit_in_bytes = 256M
lxc.cgroup.cpu.shares = 256
lxc.cgroup.blkio.weight = 500
lxc.cgroup.cpuset.cpus = 7

How do I translate the above settings for the lxc config command so
these same values end up in /var/lib/lxd/lxd.db?
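
For the sake of a concrete example, I imagine the mapping ends up something
like this, with either a high-level limits key or a raw.lxc style passthrough
(guessing at both key names, corrections welcome):

lxc config set v1 limits.memory 512M
lxc config set v1 raw.lxc "lxc.cgroup.cpuset.cpus = 7"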
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Using Docker Hub images with LXC

2014-10-21 Thread Mark Constable

On 22/10/14 12:27, CDR wrote:

https://github.com/robinmonjo/dlrootfs


Heh, this just made LXC twice as useful in one fell swoop and now
I have some reason to use my docker hub account! Many thanks.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Routable host IPs for containers

2014-09-14 Thread Mark Constable

This is probably dumb and asked a 1000 times before but I am totally
confused by various tutorials that are either too old or do not apply
to exactly my particular needs. For the purpose of ongoing/endless
testing I'd like containers to be given an IP in the same network
range via DHCP and my WIFI (wlan0) interface.

Most examples presume an eth0 interface but that does not exist for
my day to day laptop workstation.

. router 192.168.0.1 via laptop wlan0
. kubuntu 14.10 gets 192.168.0.209

A ubuntu utopic container currently gets 10.0.3.164 via the lxcbr0
bridge with a gateway of 10.0.3.1. I can ping the outside world so
the default ubuntu lxc networking system works just fine. I've tried
a variety of combinations of br0 bridges and iptables rules but haven't
found the right combination to simply allow a container to get another
DHCP IP from my router on the same network as my main WIFI link so I can
access those containers from other computers on the same 192.168.0.0/24
network.

Any tutorials or hints on how to do this?

Maybe it's not possible for containers to appear on the same /24
network as the host, but surely there must be a common way to at least
allow containers to be visible to other hosts on the same network segment?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] PHP Bindings?

2014-09-08 Thread Mark Constable

On 08/09/14 23:13, Leandro Ferreira wrote:

Hi, is anyone aware of any project with bindings for PHP?


You can use libvirt php bindings


Yep thanks. ATM I can't get it to compile on Ubuntu 14.10 and if I get
past that hurdle there doesn't seem to be much documentation to help
figure out how to use it. I've searched Github and have not found any
PHP project actually using libvirt-php so before I dig deeper into it
I wanted to know if there any more specific lxc-only bindings for PHP
and it seemed like this list would be the best place to ask.

Is anyone aware of any open source PHP projects using libvirt-php?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] PHP Bindings?

2014-09-07 Thread Mark Constable

Hi, is anyone aware of any project with bindings for PHP?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users