Re: [lxc-users] writeback cache for all container processes?

2015-02-02 Thread Tomasz Chmielewski

On 2015-02-02 21:37, Fajar A. Nugraha wrote:


It's certainly possible to do various things with
processes and their page cache, e.g.:

https://code.google.com/p/pagecache-mangagement/ [1]

Or here: disabling O_DIRECT and sync would roughly match
KVM's cache=writeback, feature-wise:

http://www.mcgill.org.za/stuff/software/nosync [2]

Is it possible to set things like this for all processes in a given
lxc container?


What are you trying to achieve?


I'm trying to achieve the equivalent of KVM's cache=writeback (or of
libeatmydata / nosync) for the whole container.
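
The closest I've got so far is wrapping individual processes with
libeatmydata inside the container, e.g. (package and wrapper as shipped in
Ubuntu; "some-write-heavy-command" is just a placeholder):

container# apt-get install eatmydata
container# eatmydata some-write-heavy-command

That only covers the wrapped processes though, not everything running in the
container - hence the question.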




If you want to disable sync for the container, the best you can do is
probably use a filesystem that can do so. For example, ZFS has a
per-dataset sync setting, so you can have sync=standard
for filesystems used by the host, and sync=disabled for filesystems
used by containers.
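
For example (hypothetical pool/dataset names - adjust to your layout):

# zfs set sync=disabled tank/containers
# zfs get sync tank/containers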


That's weird advice, given that LXC means Linux containers, and ZFS is
not in the Linux kernel (I know there are some third-party porting
attempts, but that's not really applicable in many situations).


--
Tomasz Chmielewski
http://www.sslrack.com

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxd: -B backingstore equivalent?

2015-06-05 Thread Tomasz Chmielewski

Is there a -B btrfs equivalent in lxd?

For example, with lxc, I would use:

# lxc-create --template download --name test-container -B btrfs

   -B backingstore
      'backingstore' is one of 'dir', 'lvm', 'loop', 'btrfs', 'zfs', or 'best'.
      The default is 'dir', meaning that the container root filesystem will be
      a directory under /var/lib/lxc/container/rootfs.



How can I do the same with lxd (lxc command)? It seems to default to 
dir.


# lxc launch images:ubuntu/trusty/amd64 test-container


--
Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd: -B backingstore equivalent?

2015-06-05 Thread Tomasz Chmielewski

On 2015-06-06 00:19, Tycho Andersen wrote:


# ls -l /var/lib/lxd
lrwxrwxrwx 1 root root 8 Jun  5 10:15 /var/lib/lxd -> /srv/lxd


Ah, my best guess is that lxd doesn't follow the symlink correctly
when detecting filesystems. Whatever the cause, if you file a bug
we'll fix it, thanks.


Can you point me to the bug filing system for linuxcontainers.org?

The closest thing to contributing seems to be here:

https://linuxcontainers.org/lxd/contribute/

but I don't see any "report a bug" link, issue tracker or anything similar.


--
Tomasz Chmielewski
http://wpkg.org
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd: -B backingstore equivalent?

2015-06-05 Thread Tomasz Chmielewski

On 2015-06-06 00:00, Tycho Andersen wrote:


As I've checked, this is not the case (the container is created in a
directory, not in a btrfs subvolume; lxc-create -B btrfs does create it in a
subvolume).


Can you file a bug with info to reproduce? It should work as of 0.8.


Before I file a bug report, here is how it behaves for me - /var/lib/lxd
is a symbolic link to /srv/lxd, which lives on a btrfs filesystem:


# ls -l /var/lib/lxd
lrwxrwxrwx 1 root root 8 Jun  5 10:15 /var/lib/lxd -> /srv/lxd

# mount|grep /srv
/dev/sda4 on /srv type btrfs 
(rw,noatime,device=/dev/sda4,device=/dev/sdb4,compress=zlib)



# lxc launch images:ubuntu/trusty/amd64 test-image
Creating container...done
Starting container...done
error: exit status 1

Note that it errored when trying to start the container - I have to add 
lxc.aa_allow_incomplete = 1; otherwise, it won't start (is there some 
/etc/lxc/default.conf equivalent for lxd, where this could be set?).


However, the container is already created in a directory, so I don't 
think the above error matters:


# btrfs sub list /srv|grep lxd
# btrfs sub list /srv|grep test-image
#


--
Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] kernel crash when starting an unprivileged container

2015-06-03 Thread Tomasz Chmielewski

On 2015-06-03 15:01, Tomasz Chmielewski wrote:

I'm trying to start an unprivileged container on Ubuntu 14.04;
unfortunately, the kernel crashes.

# lxc-create -t download -n test-container

(...)

# lxc-start -n test-container -F

Kernel crashes at this point.

It does not crash if I start the container as privileged.

- kernel used is 4.0.4-040004-generic from
http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.0.4-wily/


The issue was a bit weird:

- I've updated the kernel to 4.1-rc6, no longer crashing

- still, the container was not starting on 4.1-rc6

- it turned out that lxc-create -t download ... had created the container
with all files being 0 bytes for some reason (so a 0-byte /sbin/init, and
every other file 0 bytes as well)


- was executing that malformed binary (the 0-byte /sbin/init) what caused
the 4.0.4 kernel crash?


Anyway, problem solved.


--
Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] kernel crash when starting an unprivileged container

2015-06-09 Thread Tomasz Chmielewski
It may be worth trying, but it won't work reliably for most kernel
crashes (networking, disk I/O etc. may be taken down as well).


--
Tomasz Chmielewski
http://wpkg.org

On 2015-06-10 14:11, Christoph Lehmann wrote:

As a side note, you can use rsyslog's remote logging to capture the oops
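
For example, something like this in /etc/rsyslog.conf on the crashing box
(collector address hypothetical; plain UDP to port 514) forwards kernel
messages to a remote host - provided the machine stays alive long enough
to ship them:

kern.* @192.0.2.10:514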

On 3 June 2015 08:01:22 CEST, Tomasz Chmielewski
man...@wpkg.org wrote:


I'm trying to start an unprivileged container on Ubuntu 14.04;
unfortunately, the kernel crashes.

# lxc-create -t download -n test-container
(...)
Distribution: ubuntu
Release: trusty
Architecture: amd64
(...)

# lxc-start -n test-container -F

Kernel crashes at this point.

It does not crash if I start the container as privileged.

- kernel used is 4.0.4-040004-generic from
http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.0.4-wily [1]/

- lxc userspace: http://ppa.launchpad.net/ubuntu-lxc/stable/ubuntu
[2]

# dpkg -l|grep lxc
ii liblxc1        1.1.2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools (library)
ii lxc            1.1.2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools
ii lxc-templates  1.1.2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools (templates)
ii lxcfs          0.7-0ubuntu4~ubuntu14.04.1~ppa1    amd64  FUSE based filesystem for LXC
ii python3-lxc    1.1.2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools (Python 3.x bindings)

It's a bit hard to get the printout of the OOPS, as I'm only able to
access the server remotely and it doesn't manage to write the OOPS to
the log.

Anyway, after a few crashes and a "while true; do dmesg -c; done" loop I
was able to capture this:

[ 237.706914] device vethPI4H7F entered promiscuous mode
[ 237.707006] IPv6: ADDRCONF(NETDEV_UP): vethPI4H7F: link is not ready
[ 237.797284] eth0: renamed from veth1OSOTS
[ 237.824526] IPv6: ADDRCONF(NETDEV_CHANGE): vethPI4H7F: link becomes ready
[ 237.824556] lxcbr0: port 1(vethPI4H7F) entered forwarding state
[ 237.824562] lxcbr0: port 1(vethPI4H7F) entered forwarding state
[ 237.928179] BUG: unable to handle kernel NULL pointer dereference at (null)
[ 237.928262] IP: [8122f888] pin_remove+0x58/0xf0
[ 237.928318] PGD 0
[ 237.928364] Oops: 0002 [#1] SMP
[ 237.928432] Modules linked in: xt_conntrack veth xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xt_tcpudp iptable_filter ip_tables x_tables bridge stp llc intel_rapl iosf_mbi x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul eeepc_wmi ghash_clmulni_intel aesni_intel asus_wmi sparse_keymap ie31200_edac aes_x86_64 edac_core lrw gf128mul glue_helper shpchp lpc_ich ablk_helper cryptd mac_hid 8250_fintek serio_raw tpm_infineon video wmi btrfs lp parport raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq e1000e raid1 ahci raid0 ptp libahci pps_core multipath linear
[ 237.930151] CPU: 2 PID: 6568 Comm: lxc-start Not tainted 4.0.4-040004-generic #201505171336
[ 237.930188] Hardware name: System manufacturer System Product Name/P8B WS, BIOS 0904 10/24/2011
[ 237.930225] task: 880806970a00 ti: 8808090c8000 task.ti: 8808090c8000
[ 237.930259] RIP: 0010:[8122f888] [8122f888] pin_remove+0x58/0xf0
[ 237.930341] RSP: 0018:8808090cbe18 EFLAGS: 00010246
[ 237.930383] RAX:  RBX: 880808808a20 RCX: dead00100100
[ 237.930429] RDX:  RSI: dead00200200 RDI: 81f9a548
[ 237.930474] RBP: 8808090cbe28 R08: 81d11b60 R09: 0100
[ 237.930572] R13: 880806970a00 R14: 81ecd070 R15: 7ffe57fd5540
[ 237.930618] FS: 7fd448c0() GS:88082fa8() knlGS:
[ 237.930685] CS: 0010 DS:  ES:  CR0: 80050033
[ 237.930728] CR2:  CR3: 0008099c1000 CR4: 000407e0
[ 237.930773] Stack:
[ 237.930809] 880806970a00 880808808a20 8808090cbe48 8121d0f2
[ 237.930957] 8808090cbe68 880808808a20 8808090cbea8 8122fa55
[ 237.931123]  880806970a00 810bb2b0 8808090cbe70
[ 237.931286] Call Trace:
[ 237.931336] [8121d0f2] drop_mountpoint+0x22/0x40
[ 237.931380] [8122fa55] pin_kill+0x75/0x130
[ 237.931425] [810bb2b0] ? prepare_to_wait_event+0x100/0x100
[ 237.931471] [8122fb39] mnt_pin_kill+0x29/0x40
[ 237.931530] [8121baf0] cleanup_mnt+0x80/0x90
[ 237.931573] [8121bb52] __cleanup_mnt+0x12/0x20
[ 237.931617] [81096ad7] task_work_run+0xb7/0xf0
[ 237.931662] [8101607c] do_notify_resume+0xbc/0xd0
[ 237.931709] [817f0beb] int_signal+0x12/0x17


 --
 This message was sent from my Android mobile phone with K-9 Mail.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users



[lxc-users] mesh networking for lxc containers (similar to weave)?

2015-06-19 Thread Tomasz Chmielewski
Are there any solutions which would let one build mesh networking for 
lxc containers, similar to what weave does for docker?


Assumptions:

- multiple servers (hosts) which are not in the same subnet (e.g. in
different DCs in different countries),
- containers share the same subnet (e.g. 10.0.0.0/8), no matter which host
they are running on,
- if a container is migrated to a different host, it is still reachable at
the same IP address without any changes to the networking



I suppose the solution would run only once on each of the hosts, rather 
than in each container.


Is there something similar for lxc?

--
Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] creating device nodes in unprivileged containers?

2015-07-01 Thread Tomasz Chmielewski
Really not possible? How do people run debootstrap or pbuilder? These
tools are often part of build systems - am I really the first one to try
to run them in LXC?



Tomasz Chmielewski
http://wpkg.org


On 2015-07-01 17:22, Janjaap Bos wrote:

You cannot create devices from inside the container. You need to create them
beforehand, outside the rootfs, and bind mount them in the container config.


This has been explained in detail on this list, so just do a quick
search for further info.
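
Roughly along these lines (host-side path and device numbers are just an
example - /dev/loop0 here, block device major 7, minor 0):

host# mkdir -p /var/lib/lxc/devices
host# mknod -m 660 /var/lib/lxc/devices/loop0 b 7 0

and in the container's config:

lxc.mount.entry = /var/lib/lxc/devices/loop0 dev/loop0 none bind,create=file 0 0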

This only concerns lxd deployments as far as I know.
On 1 Jul 2015 10:08, Tomasz Chmielewski man...@wpkg.org wrote:


In an unprivileged Ubuntu 14.04 container, I'm trying to run a
program which needs to create device nodes.

Unfortunately it fails:

# pbuilder-dist trusty i386 create
W: /root/.pbuilderrc does not exist
I: Logging to /root/pbuilder/trusty-i386_result/last_operation.log
I: Distribution is trusty.
I: Current time: Wed Jul 1 07:25:49 UTC 2015
I: pbuilder-time-stamp: 1435735549
I: Building the build environment
I: running debootstrap
/usr/sbin/debootstrap
mknod: '/var/cache/pbuilder/build/5377/./test-dev-null': Operation
not permitted
E: Cannot install into target '/var/cache/pbuilder/build/5377/.'
mounted with noexec or nodev
E: debootstrap failed
W: Aborting with an error
I: cleaning the build env
I: removing directory /var/cache/pbuilder/build//5377 and its
subdirectories

So I've tried to add the following to container's config:

lxc.cap.keep = CAP_MKNOD

However, the container fails to start:

lxc-start 1435737618.188 ERROR lxc_conf - conf.c:lxc_setup:3925
- Simultaneously requested dropping and keeping caps

I don't see mknod being dropped anywhere in the included configs:

# grep -ri mknod /usr/share/lxc/config/*

How can I create custom device nodes?
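
(One thing I still want to try - untested; if I remember the
lxc.container.conf man page correctly, giving lxc.cap.drop an empty value
clears the list collected so far - is resetting the inherited drop list
before keeping the capability:

lxc.cap.drop =
lxc.cap.keep = CAP_MKNOD

No idea yet whether that gets past the "Simultaneously requested dropping
and keeping caps" error.)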

The host is running these versions:

# dpkg -l|grep lxc
ii liblxc1
1.1.2-0ubuntu3~ubuntu14.04.1~ppa1 amd64 Linux Containers
userspace tools (library)
ii lxc
1.1.2-0ubuntu3~ubuntu14.04.1~ppa1 amd64 Linux Containers
userspace tools
ii lxc-templates
1.1.2-0ubuntu3~ubuntu14.04.1~ppa1 amd64 Linux Containers
userspace tools (templates)
ii lxcfs
0.9-0ubuntu1~ubuntu14.04.1~ppa1 amd64 FUSE based
filesystem for LXC
ii python3-lxc
1.1.2-0ubuntu3~ubuntu14.04.1~ppa1 amd64 Linux Containers
userspace tools (Python 3.x bindings)

--
Tomasz Chmielewski
http://wpkg.org [1]

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users [2]


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc remote add - password query?

2015-08-05 Thread Tomasz Chmielewski

Trying to add a remote server:

# lxc remote add server02 https://server02:8443
Admin password for server02:


What is the remote password, and where do I set it? man lxc is not too 
helpful here.



--
Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc remote add - password query?

2015-08-05 Thread Tomasz Chmielewski

On 2015-08-05 17:57, Tomasz Chmielewski wrote:

Trying to add a remote server:

# lxc remote add server02 https://server02:8443
Admin password for server02:


What is the remote password, and where do I set it? man lxc is not
too helpful here.


Sorry for the noise - found it:

# lxc config set core.trust_password SECRET
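
For the record, the flow is: set the password on server02 itself (as above),
then from the client:

# lxc remote add server02 https://server02:8443
Admin password for server02: (enter the same SECRET)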


--
Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] set "lxc.aa_allow_incomplete = 1" - where do I add it for lxd?

2015-10-26 Thread Tomasz Chmielewski

Thanks, it worked.

How do I set other "lxc-style" values in lxd, like for example:

lxc.network.ipv4 = 10.0.12.2/24
lxc.network.ipv4.gateway = 10.0.12.1
lxc.network.ipv6 = :::::55
lxc.network.ipv6.gateway = :2345:6789:::2


Same "lxc config set containername", i.e.:

lxc config set x1 raw.lxc "lxc.network.ipv4 = 10.0.12.2/24"
lxc config set x1 raw.lxc "lxc.network.ipv4.gateway = 10.0.12.1"
lxc config set x1 raw.lxc "lxc.network.ipv6 = :::::55"
lxc config set x1 raw.lxc "lxc.network.ipv6.gateway = 
:2345:6789:::2"



Or is there some other, more recommended way?

Tomasz


On 2015-10-27 02:35, Serge Hallyn wrote:

That's an ideal use for 'raw.lxc'.

lxc config set x1 raw.lxc "lxc.aa_allow_incomplete=1"

The lxc configuration for lxd containers is auto-generated on each container
start, as is the apparmor policy. The contents of the 'raw.lxc' config
item are appended to the auto-generated config.

Quoting Tomasz Chmielewski (man...@wpkg.org):

I get the following when starting a container with lxd:

 Incomplete AppArmor support in your kernel
 If you really want to start this container, set
 lxc.aa_allow_incomplete = 1
 in your container configuration file


Where exactly do I set this with lxd? I don't really see a "config"
file, like with lxc. Is it "metadata.yaml"? If so - how to set it
there?


Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] set "lxc.aa_allow_incomplete = 1" - where do I add it for lxd?

2015-10-27 Thread Tomasz Chmielewski

On 2015-10-27 23:36, Serge Hallyn wrote:

Quoting Tomasz Chmielewski (man...@wpkg.org):

Thanks, it worked.

How do I set other "lxc-style" values in lxd, like for example:

lxc.network.ipv4 = 10.0.12.2/24
lxc.network.ipv4.gateway = 10.0.12.1
lxc.network.ipv6 = :::::55
lxc.network.ipv6.gateway = :2345:6789:::2


You need to set raw.lxc once, to a single value containing the whole multi-line block.


Hmm, what do you mean by that? Can you give an example?
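
Do you mean passing the whole block as one value, with the newlines embedded
(shell quoting should keep them), something like this (untested)?

lxc config set x1 raw.lxc "lxc.network.ipv4 = 10.0.12.2/24
lxc.network.ipv4.gateway = 10.0.12.1"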


Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] set "lxc.aa_allow_incomplete = 1" - where do I add it for lxd?

2015-10-27 Thread Tomasz Chmielewski

On 2015-10-27 23:54, Serge Hallyn wrote:

(...)


But it doesn't matter if it's bridged or routed - all I want to do is:

- to set static IPv4 and IPv6 addresses, without doing so in the
container (works with lxc),

- be sure lxd does not hang if I supply something incompatible in CLI 
:)


Yeah, that one is bad!  Can you open an issue for that?


Added:

https://github.com/lxc/lxd/issues/1246


Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] set "lxc.aa_allow_incomplete = 1" - where do I add it for lxd?

2015-10-27 Thread Tomasz Chmielewski

Interesting - this doesn't really work and hangs lxd:

1) first try:

root@srv7 ~ # lxc config set testct raw.lxc 
"lxc.network.ipv4=10.0.3.228/24"

error: problem applying raw.lxc, perhaps there is a syntax error?
root@srv7 ~ #

2) second try - it never returns:

root@srv7 ~ # lxc config set testct raw.lxc 
"lxc.network.ipv4=10.0.3.228/24"

(hangs here, no prompt)


3) in a different shell - also hangs and never returns:

root@srv7 ~ # lxc list


4) this also hangs and never returns:

root@srv7 ~ # service lxd stop



In the log, I can see:

lxc 1445956132.156 ERRORlxc_confile - 
confile.c:network_netdev:544 - network is not created for 
'lxc.network.ipv4' = '10.0.3.228/.24' option
lxc 1445956132.156 ERRORlxc_parse - 
parse.c:lxc_file_for_each_line:57 - Failed to parse config: 
lxc.network.ipv4=10.0.3.228/.24



Tomasz




On 2015-10-27 10:02, Tomasz Chmielewski wrote:

Thanks, it worked.

How do I set other "lxc-style" values in lxd, like for example:

lxc.network.ipv4 = 10.0.12.2/24
lxc.network.ipv4.gateway = 10.0.12.1
lxc.network.ipv6 = :::::55
lxc.network.ipv6.gateway = :2345:6789:::2


Same "lxc config set containername", i.e.:

lxc config set x1 raw.lxc "lxc.network.ipv4 = 10.0.12.2/24"
lxc config set x1 raw.lxc "lxc.network.ipv4.gateway = 10.0.12.1"
lxc config set x1 raw.lxc "lxc.network.ipv6 = :::::55"
lxc config set x1 raw.lxc "lxc.network.ipv6.gateway = 
:2345:6789:::2"



Or is there some other, more recommended way?

Tomasz


On 2015-10-27 02:35, Serge Hallyn wrote:

That's an ideal use for 'raw.lxc'.

lxc config set x1 raw.lxc "lxc.aa_allow_incomplete=1"

The lxc configuration for lxd containers is auto-generated on each container
start, as is the apparmor policy. The contents of the 'raw.lxc' config
item are appended to the auto-generated config.

Quoting Tomasz Chmielewski (man...@wpkg.org):

I get the following when starting a container with lxd:

 Incomplete AppArmor support in your kernel
 If you really want to start this container, set
 lxc.aa_allow_incomplete = 1
 in your container configuration file


Where exactly do I set this with lxd? I don't really see a "config"
file, like with lxc. Is it "metadata.yaml"? If so - how to set it
there?


Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] set "lxc.aa_allow_incomplete = 1" - where do I add it for lxd?

2015-10-26 Thread Tomasz Chmielewski

I get the following when starting a container with lxd:

 Incomplete AppArmor support in your kernel
 If you really want to start this container, set
 lxc.aa_allow_incomplete = 1
 in your container configuration file


Where exactly do I set this with lxd? I don't really see a "config" 
file, like with lxc. Is it "metadata.yaml"? If so - how to set it there?



Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] iptables-save not working in unprivileged containers?

2015-11-09 Thread Tomasz Chmielewski
For some reason, iptables-save does not seem to work in
unprivileged containers.


To reproduce:

- this adds a sample iptables rule:

# iptables -A INPUT -p tcp --dport 22 -j ACCEPT


- this lists the rule:

# iptables -L -v -n
Chain INPUT (policy ACCEPT 13166 packets, 5194K bytes)
 pkts bytes target prot opt in out source   
destination
0 0 ACCEPT tcp  --  *  *   0.0.0.0/0
0.0.0.0/0tcp dpt:22


Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source   
destination


Chain OUTPUT (policy ACCEPT 12620 packets, 656K bytes)
 pkts bytes target prot opt in out source   
destination



- this is supposed to dump iptables rules to stdout - but it doesn't:

# iptables-save
#


Any idea how to make "iptables-save" work in unprivileged lxc
containers?



Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] iptables-save not working in unprivileged containers?

2015-11-09 Thread Tomasz Chmielewski

On 2015-11-10 01:22, Fiedler Roman wrote:


# iptables -A INPUT -p tcp --dport 22 -j ACCEPT


Yes, also here.

Compare

iptables-save

with

iptables-save -t filter

The latter should work. I think some special tables cannot be read
unprivileged
(mangle, perhaps).


It seems to behave just like "iptables-save" executed by a non-root user
(outside a container).



Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] [BUG] lxc-destroy destroying wrong containers

2015-11-10 Thread Tomasz Chmielewski

On 2015-11-10 20:29, Christian Brauner wrote:

This may not have anything to do with lxc-destroy, but with how clones work.
Can you proceed only up to step 2) you listed:

> 2) clone it - but before the command returns, press ctrl+c (say, you
> realized you used a wrong name and want to interrupt):
>
> # lxc-clone -B dir testvm012d testvm13d
> [ctrl+c]

and immediately afterwards check whether the rootfs of the original
container testvm012d is still present?


Step 4 shows the original container is still intact:

# du -sh testvm012d testvm13d
462Mtestvm012d
11M testvm13d


So it must be lxc-destroy.


Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] [BUG] lxc-destroy destroying wrong containers

2015-11-10 Thread Tomasz Chmielewski

On 2015-11-11 07:28, Serge Hallyn wrote:

Hi,

as I think was mentioned elsewhere I suspect this is a bug in the clone 
code.
Could you open a github issue at github.com/lxc/lxc/issues and assign 
it to

me?


Added:

https://github.com/lxc/lxc/issues/694


--
Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] [BUG] lxc-destroy destroying wrong containers

2015-11-10 Thread Tomasz Chmielewski

On 2015-11-10 22:47, Christian Brauner wrote:

Yes, it is lxc-destroy, but lxc-destroy does exactly what it is expected to
do. The cause is the incomplete clone: when you clone a container, the config
of the original container gets copied. Only after the clone (copying the
storage etc.) succeeds is the config updated. That means that before the
config is updated, the config of your clone still contains the rootfs path of
the original container.

You can verify this by doing:

# lxc-clone -B dir testvm012d testvm13d
[ctrl+c]

and checking

YOUR-FAVOURITE editor testvm13d/config

it should still contain

lxc.rootfs = /path/to/testvm012d/rootfs

in contrast to when the copy of the rootfs of the original container 
succeeds.

Then it will contain:

lxc.rootfs = /path/to/testvm13d/rootfs

(lxc-devel might be a good place to determine whether this is a bug or 
not.)


Looks like lxc-clone should copy the config file at the very end, after 
rootfs.



Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc snapshot ... --stateful - "read-only file system"

2016-01-09 Thread Tomasz Chmielewski

I'm trying to do a stateful snapshot - unfortunately, it fails:

# lxc snapshot odoo08 "test"  --stateful
error: mkdir /var/lib/lxd/snapshots/odoo08/test/state: read-only file 
system



Does anyone know why? The following log is created:

lxc 1452328231.622 INFO lxc_confile - 
confile.c:config_idmap:1437 - read uid map: type u nsid 0 hostid 10 
range 65536
lxc 1452328231.622 INFO lxc_confile - 
confile.c:config_idmap:1437 - read uid map: type g nsid 0 hostid 10 
range 65536
lxc 1452328231.626 WARN lxc_cgmanager - 
cgmanager.c:cgm_get:994 - do_cgm_get exited with error



Running:

ii  liblxc1 
1.1.5-0ubuntu3~ubuntu14.04.1~ppa1   amd64Linux Containers 
userspace tools (library)
ii  lxc 
1.1.5-0ubuntu3~ubuntu14.04.1~ppa1   amd64Linux Containers 
userspace tools
ii  lxc-templates   
1.1.5-0ubuntu3~ubuntu14.04.1~ppa1   amd64Linux Containers 
userspace tools (templates)
ii  lxcfs   0.15-0ubuntu2~ubuntu14.04.1~ppa1 
   amd64FUSE based filesystem for LXC
ii  lxd 0.26-0ubuntu3~ubuntu14.04.1~ppa1 
   amd64Container hypervisor based on LXC - daemon
ii  lxd-client  0.26-0ubuntu3~ubuntu14.04.1~ppa1 
   amd64Container hypervisor based on LXC - client
ii  python3-lxc 
1.1.5-0ubuntu3~ubuntu14.04.1~ppa1   amd64Linux Containers 
userspace tools (Python 3.x bindings)



# uname -a
Linux srv7 4.3.3-040303-generic #201512150130 SMP Tue Dec 15 06:32:30 
UTC 2015 x86_64 x86_64 x86_64 GNU/Linux



Tomasz Chmielewski
http://wpkg.org/
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc exec / list: x509: certificate has expired or is not yet valid

2016-06-02 Thread Tomasz Chmielewski

Not sure what the procedure is for this one:

# lxc list
error: Get https://10.0.0.1:8443/1.0/containers?recursion=1: x509: 
certificate has expired or is not yet valid


?


Tomasz Chmielewski
http://wpkg.org
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc exec / list: x509: certificate has expired or is not yet valid

2016-06-02 Thread Tomasz Chmielewski

On 2016-06-02 22:40, Andrey Repin wrote:


So... what is the correct procedure to update the certificate on LXD
server and make sure it's still accepted by LXD clients?


I would go a long route and set up my own CA.
Though, I actually did that already...

The alternative is to get yourself a certificate through a third-party CA, like
Let's Encrypt.


Well, it seems that LXD is fine with self-signed certificates as well. 
Which is OK with me.


However, changing a cert with LXD is painful:

- needs new server.crt/server.key in /var/lib/lxd, and lxd restart? 
force-reload?


- if any client connected to the IP address (and not to a domain name), the
certificate needs to list it as a SAN (subject alternative name) - see the
openssl example after this list


- there is no "lxc remote" subcommand to just accept a new certificate from
the server, so LXD clients have to go through the painful dance of setting a
different default remote (or setting it to local), removing the remote with
the expired certificate, adding the remote back with the new certificate,
setting it as the default again, etc.


- LXD / lxc command does not alert that the cert is about to expire, so 
the user finds out when it's too late and the system stops working 
correctly (think automated starting / removal of containers etc.)


- I could not find anything about changing the cert in the LXD docs, so it was
a bit of a problem working out why it no longer works and how to
fix it
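
For reference, something along these lines should produce a self-signed
replacement certificate with an IP SAN (names, IP and validity are only an
example; -addext needs a reasonably new OpenSSL, older versions need a
config file for the SAN):

# openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
    -keyout server.key -out server.crt \
    -subj "/CN=myserver" -addext "subjectAltName = DNS:myserver, IP:10.0.0.1"
# cp server.crt server.key /var/lib/lxd/

followed by an lxd restart (or force-reload - as noted above, I'm not sure
which one is required).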



The whole process could be designed a bit better :)


Tomasz Chmielewski
http://wpkg.org
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc exec / list: x509: certificate has expired or is not yet valid

2016-06-02 Thread Tomasz Chmielewski

On 2016-06-02 21:09, Tomasz Chmielewski wrote:

Not sure what's the procedure for this one:

# lxc list
error: Get https://10.0.0.1:8443/1.0/containers?recursion=1: x509:
certificate has expired or is not yet valid


Apparently LXD sets up a certificate with 1 year validity when
installed, but provides no mechanism to renew it automatically. Which can
be a big surprise after a year :|


Also, I don't see a CSR file there.

So... what is the correct procedure to update the certificate on LXD 
server and make sure it's still accepted by LXD clients?



# ls /var/lib/lxd/server.* -l
-rw-r--r-- 1 root root 1834 Jun  3  2015 /var/lib/lxd/server.crt
-rw--- 1 root root 3247 Jun  3  2015 /var/lib/lxd/server.key


# openssl x509 -text -noout -in server.crt
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
34:f0:eb:8c:3f:76:f0:db:21:01:5d:34:1c:cd:f0:5c
Signature Algorithm: sha256WithRSAEncryption
Issuer: O=linuxcontainer.org
Validity
Not Before: Jun  3 06:33:15 2015 GMT
Not After : Jun  2 06:33:15 2016 GMT
Subject: O=linuxcontainer.org
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (4096 bit)
(...)


Tomasz Chmielewski
http://wpkg.org
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc move fails

2016-06-01 Thread Tomasz Chmielewski

On 2016-06-02 00:29, Tycho Andersen wrote:


Yes, an issue report would be appreciated. Can you include the full
log?


Reported here:

https://bugs.launchpad.net/ubuntu/+source/criu/+bug/1588133

If you need more info, let me know.



Looks like your container is using some program with some sort of
funny ELF headers that criu doesn't quite understand.


This is pretty much standard Ubuntu 16.04. The only running binary which
is not from the Ubuntu repositories is nginx (from the upstream nginx repo).



Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc move fails

2016-06-01 Thread Tomasz Chmielewski
Trying to move a running container between two Ubuntu 16.04 servers with 
the latest updates installed:


# lxc move local-container remote:
error: Error transferring container data: checkpoint failed:
(00.316028) Error (pie-util-vdso.c:155): vdso: Not all dynamic entries 
are present

(00.316211) Error (cr-dump.c:1600): Dumping FAILED.


Expected?


Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] what sets http_proxy?

2016-06-01 Thread Tomasz Chmielewski

I've moved an offline container from one server to another.

Started it there, and trying to use curl:

# curl www.example.com
curl: (5) Could not resolve proxy: fe80::1%eth0]

It's because this variable is set:

# echo $http_proxy
http://[fe80::1%eth0]:13128


What sets this and why?

I didn't set it on the host (on the host http_proxy is empty) nor in the 
container.



Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc move fails

2016-06-01 Thread Tomasz Chmielewski

On 2016-06-01 23:24, Stéphane Graber wrote:


Expected?


I don't believe I've seen that one before. Maybe Tycho has.

Live migration is considered an experimental feature of LXD right now,
mostly because CRIU still needs quite a bit of work to serialize all
useful kernel structures.

You may want to follow the "Sending bug reports" section of my post 
here

https://www.stgraber.org/2016/04/25/lxd-2-0-live-migration-912/

That way we should have all the needed data to look into this.


FYI I'm using Linux 4.6.0 (from 
http://kernel.ubuntu.com/~kernel-ppa/mainline/) on both servers, as all 
earlier kernels are not stable with btrfs.


Could it be that it's not compatible?

Do you still want me to report the issue?


Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] can't remove snapshots with certain names

2016-01-10 Thread Tomasz Chmielewski

I'm not able to remove snapshots with certain names.

This works as expected:

# lxc snapshot odoo10 "2016-01-10"
# lxc delete odoo10/2016-01-10


This also works as expected:

# lxc snapshot odoo10 "2016-01-10 test number 1"
# lxc delete odoo10/2016-01-10\ test\ number\ 1

# lxc snapshot odoo10 "2016-01-10 test number 2"
# lxc delete odoo10/"2016-01-10 test number 2"

# lxc snapshot odoo10 "2016-01-10 test number 3"
# lxc delete "odoo10/2016-01-10 test number 3"


This doesn't seem to work (with ":" in snapshot name):

# lxc snapshot odoo10 "2016-01-10 23:26"
# lxc delete "odoo10/2016-01-10 23:26"
error: unknown remote name: "odoo10/2016-01-10 23"
# lxc delete "odoo10/2016-01-10 23\:26"
error: unknown remote name: "odoo10/2016-01-10 23\\"


Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc exec - is there a way to run command in the background?

2016-01-28 Thread Tomasz Chmielewski
Is there a way to run a command in the background when using "lxc
exec"?


It doesn't seem to work for me.

# lxc exec container -- sleep 2h &
[2] 13566
#

[2]+  Stopped lxc exec container -- sleep 2h


This also doesn't work:

# lxc exec container -- "sleep 2h &"
#
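
One workaround which might do the trick (untested as I write this) is to let
a shell inside the container do the backgrounding, so the exec session
itself can return:

# lxc exec container -- sh -c "nohup sleep 2h >/dev/null 2>&1 &"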


Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxd-client: frequent "Unable to connect to"

2016-01-26 Thread Tomasz Chmielewski

I'm using lxd-client in a lxd container.

Unfortunately, very frequently I'm getting "Unable to connect to" (after 
several seconds of "hanging"), for example:


# lxc list
error: Get https://10.0.3.1:8443/1.0/containers?recursion=1: Unable to 
connect to: 10.0.3.1:8443


I've checked with tcpdump on the host, and the packets are going both 
ways.


At the same time when it happens (lxc list hangs, or any lxc command hangs),
"curl -k -v https://10.0.3.1:8443" just works - i.e. it returns:


{"type":"sync","status":"Success","status_code":200,"metadata":["/1.0"],"operation":""}



Do you have any ideas how to debug this?

The same commands executed directly on the host always work fine.


Tomasz Chmielewski
http://wpkg.org
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] "lxc file push" corrupts files

2016-01-26 Thread Tomasz Chmielewski

In some cases, "lxc file push" corrupts files.

To reproduce:

- the file must already exist in the container
- the existing file in the container must be bigger than the file being
pushed


Reproducer:

* make sure /tmp/testfile does not exist in the container:

host# lxc exec container rm /tmp/testfile


* create two files locally:

host# echo a > smaller

host# echo ab > bigger

host# md5sum smaller bigger
60b725f10c9c85c70d97880dfe8191b3  smaller
daa8075d6ac5ff8d0c6d4650adb4ef29  bigger

host# ls -l smaller bigger
-rw-r--r-- 1 root root 3 Jan 27 07:35 bigger
-rw-r--r-- 1 root root 2 Jan 27 07:35 smaller


* send the bigger file:

host# lxc file push bigger container/tmp/testfile


* check if it's OK - it is:

container# md5sum testfile
daa8075d6ac5ff8d0c6d4650adb4ef29  testfile

container# ls -l testfile
-rw-r--r-- 1 root root 3 Jan 27 07:37 testfile


* now, send the smaller file to overwrite "testfile"

host# lxc file push smaller container/tmp/testfile


* the result is neither of the files we've sent:

container# md5sum testfile
94364860a0452ac23f3dac45f0091d81  testfile

container# ls -l testfile
-rw-r--r-- 1 root root 3 Jan 27 07:38 testfile

container# cat testfile
a
 <-- extra end of line
 <-- extra end of line
container#


Tomasz Chmielewski
http://wpkg.org
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] no lxc nor lxd containers start after recent update

2016-02-25 Thread Tomasz Chmielewski

Doing the following seems to help:

# service lxcfs stop
# service lxcfs start


Then, I'm able to manually start lxc and lxd containers.


Tomasz



On 2016-02-26 09:04, Tomasz Chmielewski wrote:
None of my lxc or lxd containers start after the most recent lxc/lxd
update.


Any clues?

# lxc-start -n td-backupslave
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 346 To get more details, run the
container in foreground mode.
lxc-start: lxc_start.c: main: 348 Additional information can be
obtained by setting the --logfile and --logpriority options.


# cat /var/log/lxc/td-backupslave.log
  lxc-start 20160226000408.599 ERRORlxc_utils -
utils.c:open_without_symlink:1625 - No such file or directory - Error
examining efi in
/usr/lib/x86_64-linux-gnu/lxc/sys/firmware/efi/efivars
  lxc-start 20160226000408.599 ERRORlxc_cgfs -
cgfs.c:cgroupfs_mount_cgroup:1504 - Permission denied - error
bind-mounting /run/lxcfs/controllers/name=systemd/lxc/td-backupslave
to
/usr/lib/x86_64-linux-gnu/lxc/sys/fs/cgroup/systemd/lxc/td-backupslave
  lxc-start 20160226000408.599 ERRORlxc_conf -
conf.c:lxc_mount_auto_mounts:780 - Permission denied - error mounting
/sys/fs/cgroup
  lxc-start 20160226000408.599 ERRORlxc_conf -
conf.c:lxc_setup:3742 - failed to setup the automatic mounts for
'td-backupslave'
  lxc-start 20160226000408.599 ERRORlxc_start -
start.c:do_start:783 - failed to setup the container
  lxc-start 20160226000408.599 ERRORlxc_sync -
sync.c:__sync_wait:51 - invalid sequence number 1. expected 2
  lxc-start 20160226000408.599 ERRORlxc_start -
start.c:__lxc_start:1274 - failed to spawn 'td-backupslave'
  lxc-start 20160226000414.138 ERRORlxc_start_ui -
lxc_start.c:main:344 - The container failed to start.
  lxc-start 20160226000414.138 ERRORlxc_start_ui -
lxc_start.c:main:346 - To get more details, run the container in
foreground mode.
  lxc-start 20160226000414.138 ERRORlxc_start_ui -
lxc_start.c:main:348 - Additional information can be obtained by
setting the --logfile and --logpriority options.


# dpkg -l|grep lxc
ii  liblxc1
2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1   amd64Linux Containers
userspace tools (library)
ii  lxc
2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1   all  Transitional
package for lxc1
ii  lxc-common
2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1   amd64Linux Containers
userspace tools (common tools)
ii  lxc-templates
2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1   amd64Linux Containers
userspace tools (templates)
ii  lxc1
2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1   amd64Linux Containers
userspace tools
ii  lxcfs
2.0.0~rc2-0ubuntu1~ubuntu14.04.1~ppa1   amd64FUSE based
filesystem for LXC
ii  python3-lxc
2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1   amd64Linux Containers
userspace tools (Python 3.x bindings)

# dpkg -l|grep lxd
ii  lxd
2.0.0~beta4-0ubuntu5~ubuntu14.04.1~ppa1 amd64Container
hypervisor based on LXC - daemon
ii  lxd-client
2.0.0~beta4-0ubuntu5~ubuntu14.04.1~ppa1 amd64Container
hypervisor based on LXC - client
ii  lxd-tools
2.0.0~beta4-0ubuntu5~ubuntu14.04.1~ppa1 amd64Container
hypervisor based on LXC - extra tools


Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] no lxc nor lxd containers start after recent update

2016-02-25 Thread Tomasz Chmielewski
Also, with the "service lxcfs stop/start" fix, the console in lxd containers
(in lxc, all is fine) looks botched - after pressing enter, I get the output
below.


I guess it wasn't the best of updates, at least for me.


# lxc exec 
z-testing-a19ea622182c63ddc19bb22cde982b82-2016-02-24-19-20-55 /bin/bash

root@z-testing-a19ea622182c63ddc19bb22cde982b82-2016-02-24-19-20-55:~#
   
root@z-testing-a19ea622182c63ddc19bb22cde982b82-2016-02-24-19-20-55:~#
 
 
root@z-testing-a19ea622182c63ddc19bb22cde982b82-2016-02-24-19-20-55:~#
 
root@z-testing-a19ea622182c63ddc19bb22cde982b82-2016-02-24-19-20-55:~#
 
   
root@z-testing-a19ea622182c63ddc19bb22cde982b82-2016-02-24-19-20-55:~#
 
 
 
root@z-testing-a19ea622182c63ddc19bb22cde982b82-2016-02-24-19-20-55:~#
  
root@z-testing-a19ea622182c63ddc19bb22cde982b82-2016-02-24-19-20-55:~#
 

root@z-testing-a19ea622182c63ddc19bb22cde982b82-2016-02-24-19-20-55:~#

root@z-testing-a19ea622182c63ddc19bb22cde982b82-2016-02-24-19-20-55:~#
 
  root@z-testing-a19ea622182c63ddc19bb22cde982b82-2016-02-24-19-20-55:~#
 
 
root@z-testing-a19ea622182c63ddc19bb22cde982b82-2016-02-24-19-20-55:~#
 
root@z-testing-a19ea622182c63ddc19bb22cde982b82-2016-02-24-19-20-55:~#





Tomasz


On 2016-02-26 09:16, Tomasz Chmielewski wrote:

Doing the following seems to help:

# service lxcfs stop
# service lxcfs start


Then, I'm able to manually start lxc and lxd containers.


Tomasz



On 2016-02-26 09:04, Tomasz Chmielewski wrote:
None of my lxc or lxd containers start after the most recent lxc/lxd
update.


Any clues?

# lxc-start -n td-backupslave
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 346 To get more details, run the
container in foreground mode.
lxc-start: lxc_start.c: main: 348 Additional information can be
obtained by setting the --logfile and --logpriority options.


# cat /var/log/lxc/td-backupslave.log
  lxc-start 20160226000408.599 ERRORlxc_utils -
utils.c:open_without_symlink:1625 - No such file or directory - Error
examining efi in
/usr/lib/x86_64-linux-gnu/lxc/sys/firmware/efi/efivars
  lxc-start 20160226000408.599 ERRORlxc_cgfs -
cgfs.c:cgroupfs_mount_cgroup:1504 - Permission denied - error
bind-mounting /run/lxcfs/controllers/name=systemd/lxc/td-backupslave
to
/usr/lib/x86_64-linux-gnu/lxc/sys/fs/cgroup/systemd/lxc/td-backupslave
  lxc-start 20160226000408.599 ERRORlxc_conf -
conf.c:lxc_mount_auto_mounts:780 - Permission denied - error mounting
/sys/fs/cgroup
  lxc-start 20160226000408.599 ERRORlxc_conf -
conf.c:lxc_setup:3742 - failed to setup the automatic mounts for
'td-backupslave'
  lxc-start 20160226000408.599 ERRORlxc_start -
start.c:do_start:783 - failed to setup the container
  lxc-start 20160226000408.599 ERRORlxc_sync -
sync.c:__sync_wait:51 - invalid sequence number 1. expected 2
  lxc-start 20160226000408.599 ERRORlxc_start -
start.c:__lxc_start:1274 - failed to spawn 'td-backupslave'
  lxc-start 20160226000414.138 ERRORlxc_start_ui -
lxc_start.c:main:344 - The container failed to start.
  lxc-start 20160226000414.138 ERRORlxc_start_ui -
lxc_start.c:main:346 - To get more details, run the container in
foreground mode.
  lxc-start 20160226000414.138 ERRORlxc_start_ui -
lxc_start.c:main:348 - Additional information can be obtained by
setting the --logfile and --logpriority options.


# dpkg -l|grep lxc
ii  liblxc1
2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1   amd64Linux Containers
userspace tools (library)
ii  lxc
2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1   all  Transitional
package for lxc1
ii  lxc-common
2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1   amd64Linux Containers
userspace tools (common tools)
ii  lxc-templates
2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1   amd64Linux Containers
userspace tools (templates)
ii  lxc1
2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1   amd64Linux Containers
userspace tools
ii  lxcfs
2.0.0~rc2-0ubuntu1~ubuntu14.04.1~ppa1   amd64FUSE based
filesystem for LXC
ii  python3-lxc
2.0.0~rc3-0ubuntu1~ubuntu14

[lxc-users] no lxc nor lxd containers start after recent update

2016-02-25 Thread Tomasz Chmielewski
None of my lxc or lxd containers start after the most recent lxc/lxd
update.


Any clues?

# lxc-start -n td-backupslave
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 346 To get more details, run the container 
in foreground mode.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained 
by setting the --logfile and --logpriority options.



# cat /var/log/lxc/td-backupslave.log
  lxc-start 20160226000408.599 ERRORlxc_utils - 
utils.c:open_without_symlink:1625 - No such file or directory - Error 
examining efi in /usr/lib/x86_64-linux-gnu/lxc/sys/firmware/efi/efivars
  lxc-start 20160226000408.599 ERRORlxc_cgfs - 
cgfs.c:cgroupfs_mount_cgroup:1504 - Permission denied - error 
bind-mounting /run/lxcfs/controllers/name=systemd/lxc/td-backupslave to 
/usr/lib/x86_64-linux-gnu/lxc/sys/fs/cgroup/systemd/lxc/td-backupslave
  lxc-start 20160226000408.599 ERRORlxc_conf - 
conf.c:lxc_mount_auto_mounts:780 - Permission denied - error mounting 
/sys/fs/cgroup
  lxc-start 20160226000408.599 ERRORlxc_conf - 
conf.c:lxc_setup:3742 - failed to setup the automatic mounts for 
'td-backupslave'
  lxc-start 20160226000408.599 ERRORlxc_start - 
start.c:do_start:783 - failed to setup the container
  lxc-start 20160226000408.599 ERRORlxc_sync - 
sync.c:__sync_wait:51 - invalid sequence number 1. expected 2
  lxc-start 20160226000408.599 ERRORlxc_start - 
start.c:__lxc_start:1274 - failed to spawn 'td-backupslave'
  lxc-start 20160226000414.138 ERRORlxc_start_ui - 
lxc_start.c:main:344 - The container failed to start.
  lxc-start 20160226000414.138 ERRORlxc_start_ui - 
lxc_start.c:main:346 - To get more details, run the container in 
foreground mode.
  lxc-start 20160226000414.138 ERRORlxc_start_ui - 
lxc_start.c:main:348 - Additional information can be obtained by setting 
the --logfile and --logpriority options.



# dpkg -l|grep lxc
ii  liblxc1  
2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1   amd64Linux Containers 
userspace tools (library)
ii  lxc  
2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1   all  Transitional 
package for lxc1
ii  lxc-common   
2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1   amd64Linux Containers 
userspace tools (common tools)
ii  lxc-templates
2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1   amd64Linux Containers 
userspace tools (templates)
ii  lxc1 
2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1   amd64Linux Containers 
userspace tools
ii  lxcfs
2.0.0~rc2-0ubuntu1~ubuntu14.04.1~ppa1   amd64FUSE based 
filesystem for LXC
ii  python3-lxc  
2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1   amd64Linux Containers 
userspace tools (Python 3.x bindings)


# dpkg -l|grep lxd
ii  lxd  
2.0.0~beta4-0ubuntu5~ubuntu14.04.1~ppa1 amd64Container 
hypervisor based on LXC - daemon
ii  lxd-client   
2.0.0~beta4-0ubuntu5~ubuntu14.04.1~ppa1 amd64Container 
hypervisor based on LXC - client
ii  lxd-tools
2.0.0~beta4-0ubuntu5~ubuntu14.04.1~ppa1 amd64Container 
hypervisor based on LXC - extra tools



Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd: restore snapshot as a new container?

2016-01-25 Thread Tomasz Chmielewski

On 2016-01-20 02:04, Serge Hallyn wrote:

Quoting Tomasz Chmielewski (man...@wpkg.org):

Can lxc restore a snapshot as a new container?

Let's say I have a container named "container1" and make a snapshot
called "test1":

# lxc snapshot container1 "test1"


How would I restore it as a new container, called "container1-test"?


lxc copy container1/test1 container1-test1


If using a filesystem which supports snapshots (btrfs), will it copy the
container's directory (uses lots of space, takes long), or snapshot it
(almost instant, uses almost no extra space)?



Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc file "only allowed for containers that are currently running"?

2016-01-25 Thread Tomasz Chmielewski

On 2016-01-26 01:46, Stéphane Graber wrote:


So either documentation is outdated, and lxc push/pull is allowed
for containers in any state (or at least RUNNING and STOPPED) or the
functionality will be removed.
Which one is true? Being able to push/pull the files is quite
convenient.



I changed file pull/push a little while ago to work against stopped
containers too, clearly I forgot to update the documentation :)


Excellent!


A pull request would be appreciated, otherwise I'll try to remember to
fix this next time I look at the specs.


I would if I knew how!
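
(For anyone else wondering, the usual GitHub flow seems to be roughly: fork
lxc/lxd on GitHub, then:

# git clone git@github.com:YOUR-USER/lxd.git && cd lxd
# git checkout -b file-push-pull-docs
(edit specs/command-line-user-experience.md)
# git commit -a -s -m "specs: file push/pull also works on stopped containers"
# git push origin file-push-pull-docs

and open a pull request from that branch against github.com/lxc/lxd.)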


Tomasz Chmielewski
http://wpkg.org
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc file "only allowed for containers that are currently running"?

2016-01-25 Thread Tomasz Chmielewski
According to the fine manual [1], lxc file "is only allowed for containers
that are currently running".


I've tried doing both push and pull operations on a container in STOPPED 
state, and it worked, i.e.:


lxc file pull stopped-container/etc/services .
lxc file push services stopped-container/etc/services


So either the documentation is outdated and lxc file push/pull is allowed for
containers in any state (or at least RUNNING and STOPPED), or the
functionality will be removed.
Which one is true? Being able to push/pull files is quite
convenient.



I'm using:

lxd-client 0.27-0ubuntu2~ubuntu14.04.1~ppa1 amd64


[1] 
https://github.com/lxc/lxd/blob/master/specs/command-line-user-experience.md#file




Tomasz Chmielewski
http://wpkg.org/
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd: restore snapshot as a new container?

2016-01-25 Thread Tomasz Chmielewski

On 2016-01-25 22:19, Tomasz Chmielewski wrote:


Let's say I have a container named "container1" and make a snapshot
called "test1":

# lxc snapshot container1 "test1"


How would I restore it as a new container, called "container1-test"?


lxc copy container1/test1 container1-test1


If using a filesystem which supports snapshots (btrfs), will it copy the
container's directory (uses lots of space, takes long), or snapshot it
(almost instant, uses almost no extra space)?


It seems to be doing a proper snapshot - good :)

Tomasz Chmielewski
http://wpkg.org
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] "termination protection"?

2016-01-25 Thread Tomasz Chmielewski

Is there a way to protect the containers against accidental termination?

For example:

# lxc list
| container-2016-01-25-17-20-11 | RUNNING | 10.190.0.50 (eth0) 
(...)


# lxc delete container-2016-01-25-17-20-11

No longer there!


Some kind of "lxc config set containername allowdelete=0" would be very 
useful:


- "s" is next to "d" on the keyboard, so it's easy to delete the 
container with:


lxc d-press-tab containername

- it would feel safer to protect important containers this way

- probably "lxc config set containername allowdelete=0" should not 
protect snapshots, if named explicitely, i.e. "lxc delete 
containername/snapshot"



Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxd - autostart unreliable on busy servers

2016-01-24 Thread Tomasz Chmielewski
When I restart a busy server (running several containers, creating 100% 
IO load for about 10 mins after start), my lxd containers do not 
autostart reliably.


If I start them manually later on, they start fine (although "lxc start 
containername" needs a while to return).



Is there a way to make lxd autostart more reliable? Perhaps it's some 
kind of timeout which needs to be increased somewhere?
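
One thing I intend to try (assuming this LXD version already supports these
config keys) is spacing the containers out with the boot.autostart.*
settings, e.g.:

# lxc config set containername boot.autostart true
# lxc config set containername boot.autostart.priority 10
# lxc config set containername boot.autostart.delay 30

No idea yet whether that helps with the cgmanager errors below, though.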




In the log, I can see:

lxc 1453630080.796 ERRORlxc_cgmanager - 
cgmanager.c:cgm_dbus_connect:176 - Error cgroup manager api version: Did 
not receive a reply. Possible causes include: the remote application did 
not send a reply, the message bus security policy blocked the reply, the 
reply timeout expired, or the network connection was broken.
lxc 1453630080.796 ERRORlxc_cgmanager - 
cgmanager.c:do_cgm_get:872 - Error connecting to cgroup manager
lxc 1453630080.797 WARN lxc_cgmanager - 
cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 1453630080.799 DEBUGlxc_cgmanager - 
cgmanager.c:cgm_dbus_connect:152 - Failed opening dbus connection: 
org.freedesktop.DBus.Error.NoServer: Failed to connect to socket 
/sys/fs/cgroup/cgmanager/sock: Connection refused
lxc 1453630080.799 ERRORlxc_cgmanager - 
cgmanager.c:do_cgm_get:872 - Error connecting to cgroup manager
lxc 1453630080.799 WARN lxc_cgmanager - 
cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 1453630081.096 DEBUGlxc_cgmanager - 
cgmanager.c:cgm_dbus_connect:152 - Failed opening dbus connection: 
org.freedesktop.DBus.Error.NoServer: Failed to connect to socket 
/sys/fs/cgroup/cgmanager/sock: Connection refused
lxc 1453630081.097 ERRORlxc_cgmanager - 
cgmanager.c:do_cgm_get:872 - Error connecting to cgroup manager
lxc 1453630081.097 WARN lxc_cgmanager - 
cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 1453630085.958 INFO lxc_confile - 
confile.c:config_idmap:1437 - read uid map: type u nsid 0 hostid 10 
range 65536
lxc 1453630085.958 INFO lxc_confile - 
confile.c:config_idmap:1437 - read uid map: type g nsid 0 hostid 10 
range 65536
lxc 1453630085.960 DEBUGlxc_cgmanager - 
cgmanager.c:cgm_dbus_connect:152 - Failed opening dbus connection: 
org.freedesktop.DBus.Error.NoServer: Failed to connect to socket 
/sys/fs/cgroup/cgmanager/sock: Connection refused
lxc 1453630085.960 ERRORlxc_cgmanager - 
cgmanager.c:do_cgm_get:872 - Error connecting to cgroup manager
lxc 1453630085.961 WARN lxc_cgmanager - 
cgmanager.c:cgm_get:989 - do_cgm_get exited with error





Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxd: restore snapshot as a new container?

2016-01-19 Thread Tomasz Chmielewski

Can lxc restore a snapshot as a new container?

Let's say I have a container named "container1" and make a snapshot 
called "test1":


# lxc snapshot container1 "test1"


How would I restore it as a new container, called "container1-test"?


Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc stop / lxc reboot stopped working

2016-03-10 Thread Tomasz Chmielewski
Something like this reproduces it for me reliably (hangs on the first or 
second "stop"):



while true; do
   echo stop
   time lxc stop containername --debug
   sleep 5
   echo start
   lxc start containername
done


Tomasz


On 2016-03-11 01:35, Tomasz Chmielewski wrote:

Am I the only one affected?


Also happens with:

ii  lxd
2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1 amd64Container
hypervisor based on LXC - daemon
ii  lxd-client
2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1 amd64Container
hypervisor based on LXC - client
ii  lxd-tools
2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1 amd64Container
hypervisor based on LXC - extra tools


"lxc restart containername" mostly just hangs.



Tomasz


On 2016-03-09 17:53, Tomasz Chmielewski wrote:

After the latest lxd update, lxc stop / lxc reboot no longer work (and
hang instead).


# dpkg -l|grep lxd
ii  lxd
2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64Container
hypervisor based on LXC - daemon
ii  lxd-client
2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64Container
hypervisor based on LXC - client
ii  lxd-tools
2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64Container
hypervisor based on LXC - extra tools



# lxc stop
z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26 --debug
DBUG[03-09|08:50:05] Raw response:
{"type":"sync","status":"Success","status_code":200,"metadata":{"api_extensions":[],"api_status":"development","api_version":"1.0","auth":"trusted","config":{"core.https_address":"10.190.0.1:8443","core.trust_password":true},"environment":{"addresses":["10.190.0.1:8443"],"architectures":["x86_64","i686"],"certificate":"-BEGIN
CERTIFICATE-
(...)
-END
CERTIFICATE-\n","driver":"lxc","driver_version":"2.0.0.rc5","kernel":"Linux","kernel_architecture":"x86_64","kernel_version":"4.4.4-040404-generic","server":"lxd","server_pid":22764,"server_version":"2.0.0.rc2","storage":"btrfs","storage_version":"4.4"},"public":false}}

DBUG[03-09|08:50:05] Raw response:




{"type":"sync","status":"Success","status_code":200,"metadata":{"architecture":"x86_64","config":{"raw.lxc":"lxc.aa_allow_incomplete=1","volatile.base_image":"1032903165a677e18ed93bde5057ae6287841ae756d1a6296eef8f2e5a825e4a","volatile.eth0.hwaddr":"00:16:3e:e4:36:64","volatile.eth0.name":"eth0","volatile.last_state.idmap":"[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536}]"},"created_at":"2016-03-09T08:22:27Z","devices":{"eth0":{"nictype":"bridged","parent":"br-testing","type":"nic"},"root":{"path":"/","type":"disk"},"uploads":{"path":"/var/www/uploads","source":"/srv/deployment/uploads","type":"disk"}},"ephemeral":false,"expanded_config":{"raw.lxc":"lxc.aa_allow_incomplete=1","volatile.base_image":"1032903165a677e18ed93bde5057ae6287841ae756d1a6296eef8f2e5a825e4a","volatile.eth0.hwaddr":"00:16:3e:e4:36:64","volatile.eth0.name":"eth0","volatile.last_state.idmap":"[{\"Isuid\

":true,\"Isgid\":false,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536}]"},"expanded_devices":{"eth0":{"nictype":"bridged","parent":"br-testing","type":"nic"},"root":{"path":"/","type":"disk"},"uploads":{"path":"/var/www/uploads","source":"/srv/deployment/uploads","type":"disk"}},"name":"z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26","profiles":["default"],"stateful":false,"status":"Running","status_code":103}}

DB

Re: [lxc-users] lxc stop / lxc reboot stopped working

2016-03-10 Thread Tomasz Chmielewski

Am I the only one affected?


Also happens with:

ii  lxd  
2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1 amd64Container hypervisor 
based on LXC - daemon
ii  lxd-client   
2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1 amd64Container hypervisor 
based on LXC - client
ii  lxd-tools
2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1 amd64Container hypervisor 
based on LXC - extra tools



"lxc restart containername" mostly just hangs.



Tomasz


On 2016-03-09 17:53, Tomasz Chmielewski wrote:

After the latest lxd update, lxc stop / lxc reboot no longer work (and
hang instead).


# dpkg -l|grep lxd
ii  lxd
2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64Container
hypervisor based on LXC - daemon
ii  lxd-client
2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64Container
hypervisor based on LXC - client
ii  lxd-tools
2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64Container
hypervisor based on LXC - extra tools



# lxc stop
z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26 --debug
DBUG[03-09|08:50:05] Raw response:
{"type":"sync","status":"Success","status_code":200,"metadata":{"api_extensions":[],"api_status":"development","api_version":"1.0","auth":"trusted","config":{"core.https_address":"10.190.0.1:8443","core.trust_password":true},"environment":{"addresses":["10.190.0.1:8443"],"architectures":["x86_64","i686"],"certificate":"-BEGIN
CERTIFICATE-
(...)
-END
CERTIFICATE-\n","driver":"lxc","driver_version":"2.0.0.rc5","kernel":"Linux","kernel_architecture":"x86_64","kernel_version":"4.4.4-040404-generic","server":"lxd","server_pid":22764,"server_version":"2.0.0.rc2","storage":"btrfs","storage_version":"4.4"},"public":false}}

DBUG[03-09|08:50:05] Raw response:


{"type":"sync","status":"Success","status_code":200,"metadata":{"architecture":"x86_64","config":{"raw.lxc":"lxc.aa_allow_incomplete=1","volatile.base_image":"1032903165a677e18ed93bde5057ae6287841ae756d1a6296eef8f2e5a825e4a","volatile.eth0.hwaddr":"00:16:3e:e4:36:64","volatile.eth0.name":"eth0","volatile.last_state.idmap":"[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536}]"},"created_at":"2016-03-09T08:22:27Z","devices":{"eth0":{"nictype":"bridged","parent":"br-testing","type":"nic"},"root":{"path":"/","type":"disk"},"uploads":{"path":"/var/www/uploads","source":"/srv/deployment/uploads","type":"disk"}},"ephemeral":false,"expanded_config":{"raw.lxc":"lxc.aa_allow_incomplete=1","volatile.base_image":"1032903165a677e18ed93bde5057ae6287841ae756d1a6296eef8f2e5a825e4a","volatile.eth0.hwaddr":"00:16:3e:e4:36:64","volatile.eth0.name":"eth0","volatile.last_state.idmap":"[{\"Isuid\

":true,\"Isgid\":false,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536}]"},"expanded_devices":{"eth0":{"nictype":"bridged","parent":"br-testing","type":"nic"},"root":{"path":"/","type":"disk"},"uploads":{"path":"/var/www/uploads","source":"/srv/deployment/uploads","type":"disk"}},"name":"z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26","profiles":["default"],"stateful":false,"status":"Running","status_code":103}}

DBUG[03-09|08:50:05] Putting
{"action":"stop","force":false,"stateful":false,"timeout":-1}
 to
http://unix.socket/1.0/containers/z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26/state
DBUG[03-09|08:50:05] Raw response: {"type":"async","status":"Operation
created","status_code":100,"metadata":{"id":"818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d","class":"task","created_at":"2016-03-09T08:50:05.465171729Z","updated_at":"2016-03-09T08:50:05.465171729Z","status":"Running","status_code":103,"resources":{"containers":["/1.0/containers/z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26"]},"metadata":null,"may_cancel":false,"err":""},"operation":"/1.0/operations/818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d"}

DBUG[03-09|08:50:05] 
1.0/operations/818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d/wait





Just sits and hangs here.


Is there any quick fix for that?


Other than that - do you have any system which checks basic
functionality before pushing the packages to the general public? It seems we've
had lots of bugs making lxd unusable lately.


Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc stop / lxc reboot stopped working

2016-03-10 Thread Tomasz Chmielewski

For "lxc restart", this reproduces reliably (below).

It seems that there may be some race - if "sleep" is set to lower 
values, it seems more likely that it will fail.



# while true; do
   echo restart
   time lxc restart containername
   sleep 3
done


restart


real0m15.448s
user0m0.048s
sys 0m0.000s
restart

real0m11.373s
user0m0.052s
sys 0m0.004s
restart

real0m13.019s
user0m0.048s
sys 0m0.000s
restart

real0m6.023s
user0m0.040s
sys 0m0.008s
restart

real0m7.106s
user0m0.048s
sys 0m0.000s
restart

real0m5.520s
user0m0.044s
sys 0m0.004s
restart

real0m49.382s
user0m0.052s
sys 0m0.000s
restart

real0m33.426s
user0m0.048s
sys 0m0.000s
restart


...hangs here...



Tomasz


On 2016-03-11 02:23, Tomasz Chmielewski wrote:

Something like this reproduces it for me reliably (hangs on the first
or second "stop"):


while true; do
   echo stop
   time lxc stop containername --debug
   sleep 5
   echo start
   lxc start containername
done


Tomasz


On 2016-03-11 01:35, Tomasz Chmielewski wrote:

Am I the only one affected?


Also happens with:

ii  lxd
2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1 amd64Container
hypervisor based on LXC - daemon
ii  lxd-client
2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1 amd64Container
hypervisor based on LXC - client
ii  lxd-tools
2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1 amd64Container
hypervisor based on LXC - extra tools


"lxc restart containername" mostly just hangs.



Tomasz


On 2016-03-09 17:53, Tomasz Chmielewski wrote:
After the latest lxd update, lxc stop / lxc reboot no longer work 
(and

hang instead).


# dpkg -l|grep lxd
ii  lxd
2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64Container
hypervisor based on LXC - daemon
ii  lxd-client
2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64Container
hypervisor based on LXC - client
ii  lxd-tools
2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64Container
hypervisor based on LXC - extra tools



# lxc stop
z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26 
--debug

DBUG[03-09|08:50:05] Raw response:
{"type":"sync","status":"Success","status_code":200,"metadata":{"api_extensions":[],"api_status":"development","api_version":"1.0","auth":"trusted","config":{"core.https_address":"10.190.0.1:8443","core.trust_password":true},"environment":{"addresses":["10.190.0.1:8443"],"architectures":["x86_64","i686"],"certificate":"-BEGIN
CERTIFICATE-
(...)
-END
CERTIFICATE-\n","driver":"lxc","driver_version":"2.0.0.rc5","kernel":"Linux","kernel_architecture":"x86_64","kernel_version":"4.4.4-040404-generic","server":"lxd","server_pid":22764,"server_version":"2.0.0.rc2","storage":"btrfs","storage_version":"4.4"},"public":false}}

DBUG[03-09|08:50:05] Raw response:






{"type":"sync","status":"Success","status_code":200,"metadata":{"architecture":"x86_64","config":{"raw.lxc":"lxc.aa_allow_incomplete=1","volatile.base_image":"1032903165a677e18ed93bde5057ae6287841ae756d1a6296eef8f2e5a825e4a","volatile.eth0.hwaddr":"00:16:3e:e4:36:64","volatile.eth0.name":"eth0","volatile.last_state.idmap":"[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536}]"},"created_at":"2016-03-09T08:22:27Z","devices":{"eth0":{"nictype":"bridged","parent":"br-testing","type":"nic"},"root":{"path":"/","type":"disk"},"uploads":{"path":"/var/www/uploads","source":"/srv/deployment/uploads","type":"disk"}},"ephemeral":false,"expanded_config":{"raw.lxc":"lxc.aa_allow_incomplete=1","volatile.base_image":"1032903165a677e18ed93bde5057ae6287841ae756d1a6296eef8f2e5a825e4a","volatile.eth0.hwaddr":"00:16:3e:e4:36:64","volatile.eth0.name":"eth0","volatile.last_state.idmap":"[{\"Isuid\

":true,\"Isgid\":false,\"Hostid\"

[lxc-users] lxc stop / lxc reboot stopped working

2016-03-09 Thread Tomasz Chmielewski
After the latest lxd update, lxc stop / lxc reboot no longer work (and 
hang instead).



# dpkg -l|grep lxd
ii  lxd  
2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64Container hypervisor 
based on LXC - daemon
ii  lxd-client   
2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64Container hypervisor 
based on LXC - client
ii  lxd-tools
2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64Container hypervisor 
based on LXC - extra tools




# lxc stop 
z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26 --debug
DBUG[03-09|08:50:05] Raw response: 
{"type":"sync","status":"Success","status_code":200,"metadata":{"api_extensions":[],"api_status":"development","api_version":"1.0","auth":"trusted","config":{"core.https_address":"10.190.0.1:8443","core.trust_password":true},"environment":{"addresses":["10.190.0.1:8443"],"architectures":["x86_64","i686"],"certificate":"-BEGIN 
CERTIFICATE-

(...)
-END 
CERTIFICATE-\n","driver":"lxc","driver_version":"2.0.0.rc5","kernel":"Linux","kernel_architecture":"x86_64","kernel_version":"4.4.4-040404-generic","server":"lxd","server_pid":22764,"server_version":"2.0.0.rc2","storage":"btrfs","storage_version":"4.4"},"public":false}}


DBUG[03-09|08:50:05] Raw response: 
{"type":"sync","status":"Success","status_code":200,"metadata":{"architecture":"x86_64","config":{"raw.lxc":"lxc.aa_allow_incomplete=1","volatile.base_image":"1032903165a677e18ed93bde5057ae6287841ae756d1a6296eef8f2e5a825e4a","volatile.eth0.hwaddr":"00:16:3e:e4:36:64","volatile.eth0.name":"eth0","volatile.last_state.idmap":"[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536}]"},"created_at":"2016-03-09T08:22:27Z","devices":{"eth0":{"nictype":"bridged","parent":"br-testing","type":"nic"},"root":{"path":"/","type":"disk"},"uploads":{"path":"/var/www/uploads","source":"/srv/deployment/uploads","type":"disk"}},"ephemeral":false,"expanded_config":{"raw.lxc":"lxc.aa_allow_incomplete=1","volatile.base_image":"1032903165a677e18ed93bde5057ae6287841ae756d1a6296eef8f2e5a825e4a","volatile.eth0.hwaddr":"00:16:3e:e4:36:64","volatile.eth0.name":"eth0","volatile.last_state.idmap":"[{\"Isuid\

":true,\"Isgid\":false,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536}]"},"expanded_devices":{"eth0":{"nictype":"bridged","parent":"br-testing","type":"nic"},"root":{"path":"/","type":"disk"},"uploads":{"path":"/var/www/uploads","source":"/srv/deployment/uploads","type":"disk"}},"name":"z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26","profiles":["default"],"stateful":false,"status":"Running","status_code":103}}

DBUG[03-09|08:50:05] Putting 
{"action":"stop","force":false,"stateful":false,"timeout":-1}
 to 
http://unix.socket/1.0/containers/z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26/state
DBUG[03-09|08:50:05] Raw response: {"type":"async","status":"Operation 
created","status_code":100,"metadata":{"id":"818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d","class":"task","created_at":"2016-03-09T08:50:05.465171729Z","updated_at":"2016-03-09T08:50:05.465171729Z","status":"Running","status_code":103,"resources":{"containers":["/1.0/containers/z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26"]},"metadata":null,"may_cancel":false,"err":""},"operation":"/1.0/operations/818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d"}


DBUG[03-09|08:50:05] 
1.0/operations/818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d/wait





Just sits and hangs here.


Is there any quick fix for that?


Other than that - do you have any system which checks basic 
functionality before pushing the packages to the general public? It seems we've 
had lots of bugs making lxd unusable lately.



Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] recent lxd update broke lxc exec terminal size?

2016-03-29 Thread Tomasz Chmielewski
After a recent lxd update, doing "lxc exec somecontainer /bin/bash" will 
attach to the given container's console, but its size is very small - less 
than 1/4 of the screen.


It's quite uncomfortable to work with (e.g. ps auxf output is truncated, and 
ncurses-based programs behave erratically).



Is this an intended change?


Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] zfs disk usage for published lxd images

2016-05-25 Thread Tomasz Chmielewski
I've been using btrfs quite a lot and it's great technology. There are 
some shortcomings though:


1) compression only really works with the compress-force mount option

On a system which only stores text logs (receiving remote rsyslog logs), 
I was gaining around 10% with the compress=zlib mount option - not that 
great for text files/logs. With compress-force=zlib, I'm getting over 85% 
compression ratio (i.e. using just 165 GB of disk space to store 1.2 TB 
of data). Maybe that's a consequence of receiving log streams, not sure 
(but compress-force fixed the poor compression ratio).



2) the first kernel where I'm not getting out-of-space issues is 4.6 
(which was released yesterday). If you're using a distribution kernel, 
you will probably be seeing out-of-space issues. A quite likely scenario 
for hitting out-of-space with a kernel older than 4.6 is using a database 
(postgresql, mongo etc.) and snapshotting the volume. Ubuntu users can 
download kernel packages from 
http://kernel.ubuntu.com/~kernel-ppa/mainline/



3) I had some really bad experiences with btrfs quota stability in older 
kernels, and judging by the amount of changes in this area on the 
linux-btrfs mailing list, I'd rather wait a few stable kernel releases 
before using it again



4) if you use databases - you should chattr +C the database dir, otherwise 
performance will suffer. Please remember that chattr +C does not have 
any effect on existing files, so you might need to stop your database, 
copy the files out, chattr +C the database dir, and copy the files back in 
(see the sketch below)
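
A minimal sketch of points 1) and 4) above, assuming a btrfs filesystem 
mounted at /srv and a PostgreSQL data directory at /srv/db (the paths, the 
zlib choice and the service name are assumptions, not taken from my setup):

# force compression, bypassing btrfs' own compressibility heuristics
mount -o remount,compress-force=zlib /srv

# disable copy-on-write for the database dir; the flag only affects files
# created after it is set, hence the copy-out / copy-in dance
systemctl stop postgresql
mv /srv/db /srv/db.old
mkdir /srv/db
chattr +C /srv/db
cp -a /srv/db.old/. /srv/db/
systemctl start postgresql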



Other than that - works fine, snapshots are very useful.

It's hard for me to say what's "more stable" on Linux (btrfs or zfs); my 
bet would be on btrfs getting more attention in the coming year, as it's 
getting its remaining bugs fixed.



Tomasz Chmielewski
http://wpkg.org




On 2016-05-16 20:20, Ron Kelley wrote:

I tried ZFS on various linux/FreeBSD builds in the past and the
performance was awful.  It simply required too much RAM to perform
properly.  This is why I went the BTRFS route.

Maybe I should look at ZFS again on Ubuntu 16.04...



On 5/16/2016 6:59 AM, Fajar A. Nugraha wrote:
On Mon, May 16, 2016 at 5:38 PM, Ron Kelley <rkelley...@gmail.com> 
wrote:

For what's worth, I use BTRFS, and it works great.


Btrfs also works in nested lxd, so if that's your primary use I highly
recommend btrfs.

Of course, you could also keep using zfs-backed containers, but
manually assign a zvol-formatted-as-btrfs for first-level-container's
/var/lib/lxd.

 Container copies are almost instant.  I can use compression with 
minimal overhead,


zfs and btrfs are almost identical in that aspect (snapshot/clone, and
lz4 vs lzop in compression time and ratio). However, lz4 (used in zfs)
is MUCH faster at decompression compared to lzop (used in btrfs),
while lzop uses less memory.


use quotas to limit container disk space,


zfs does that too

and can schedule a deduplication task via cron to save even more 
space.


That is, indeed, only available in btrfs


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Btrfs - Disk quota and Ubuntu 15.10

2016-06-30 Thread Tomasz Chmielewski

On 2016-06-30 18:55, Sjoerd wrote:

On 30/06/2016 11:17, Tomasz Chmielewski wrote:


Please note that btrfs is not a stable filesystem, at least not in the
latest Ubuntu (16.04).

You may have "out of space" errors with them, especially when doing 
snapshots.


kernels 4.6.x[1] behave stable for me.


I am not using RAID5/6 with btrfs. Only the latter is still not
production ready as I understood it.
My amount of snapshots won't be a lot (maybe 50 max or so), since I
delete them regularly.
But I'll keep an eye on the metadata as well indeed. Thanks for the 
hint.


"out of space" when doing snapshot affects kernels older than 4.6, no 
matter if you use RAID-1, RAID-5/6, or no RAID.


It's especially annoying when snapshotting running containers 
with postgres, mysql, mongo etc. - as this causes database errors or 
crashes.



Tomasz Chmielewski
http://wpkg.org/
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Btrfs - Disk quota and Ubuntu 15.10

2016-06-30 Thread Tomasz Chmielewski

On 2016-06-30 17:38, Sjoerd wrote:

On 29/06/2016 20:02, Benoit GEORGELIN - Association Web4all wrote:

Hi, (without hijacking another thread)

I'm sharing with you some information about BTRFS and Ubuntu 15.10,
Kernel  4.2.0-30-generic regarding a quota disk error on my LXC
containers
If you plan to use quota, this will be interesting for you to know.


Yes I just read it on the btrfs-mailinglist ;)

Anyway, besides the problems you describe, using quota also brings down
btrfs send/receive speed to a crawl.
I am backing up my containers with btrfs send/receive and because of
all the quota problems described I am not using it anymore.


Please note that btrfs is not a stable filesystem, at least not in the 
latest Ubuntu (16.04).


You may have "out of space" errors with them, especially when doing 
snapshots.


kernels 4.6.x[1] behave stable for me.


[1] working with Ubuntu, from 
http://kernel.ubuntu.com/~kernel-ppa/mainline/



Tomasz Chmielewski
http://wpkg.org/
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd hang - "panic: runtime error: slice bounds out of range"

2017-01-30 Thread Tomasz Chmielewski
I think it may be related to 
https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/


I have a docker container, with several dockers inside, and with lxd 
snapshots.


Doing this:

# lxc delete docker
error: Get 
http://unix.socket/1.0/operations/7d30bf41-3af6-4b48-b42c-06fdd2bba48b/wait: 
EOF


Results in:

Jan 31 07:28:36 lxd01 lxd[8363]: panic: runtime error: slice bounds out 
of range

Jan 31 07:28:36 lxd01 lxd[8363]: goroutine 867 [running]:
Jan 31 07:28:36 lxd01 lxd[8363]: panic(0xadef00, 0xc82000e050)
Jan 31 07:28:36 lxd01 lxd[8363]: 
#011/usr/lib/go-1.6/src/runtime/panic.go:481 +0x3e6
Jan 31 07:28:36 lxd01 lxd[8363]: 
main.(*storageBtrfs).getSubVolumes(0xc8200f8240, 0xc82000bb80, 0x3a, 
0x0, 0x0, 0x0, 0x0, 0x0)
Jan 31 07:28:36 lxd01 lxd[8363]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:812 
+0x104f
Jan 31 07:28:36 lxd01 lxd[8363]: 
main.(*storageBtrfs).subvolsDelete(0xc8200f8240, 0xc82000bb80, 0x3a, 
0x0, 0x0)
Jan 31 07:28:36 lxd01 lxd[8363]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:574 
+0x72
Jan 31 07:28:36 lxd01 lxd[8363]: 
main.(*storageBtrfs).ContainerDelete(0xc8200f8240, 0x7f5f55ba1900, 
0xc8204a0480, 0x0, 0x0)
Jan 31 07:28:36 lxd01 lxd[8363]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:119 
+0xb0
Jan 31 07:28:36 lxd01 lxd[8363]: 
main.(*storageBtrfs).ContainerSnapshotDelete(0xc8200f8240, 
0x7f5f55ba1900, 0xc8204a0480, 0x0, 0x0)
Jan 31 07:28:36 lxd01 lxd[8363]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:321 
+0x5c
Jan 31 07:28:36 lxd01 lxd[8363]: 
main.(*storageLogWrapper).ContainerSnapshotDelete(0xc8200fda60, 
0x7f5f55ba1900, 0xc8204a0480, 0x0, 0x0)
Jan 31 07:28:36 lxd01 lxd[8363]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage.go:510 
+0x22b
Jan 31 07:28:36 lxd01 lxd[8363]: 
main.(*containerLXC).Delete(0xc8204a0480, 0x0, 0x0)
Jan 31 07:28:36 lxd01 lxd[8363]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/container_lxc.go:2366 
+0x30e
Jan 31 07:28:36 lxd01 lxd[8363]: 
main.containerDeleteSnapshots(0xc820090a00, 0xc82017c017, 0x6, 0x0, 0x0)
Jan 31 07:28:36 lxd01 lxd[8363]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/containers.go:208 
+0x4ac
Jan 31 07:28:36 lxd01 lxd[8363]: 
main.(*containerLXC).Delete(0xc8204a00c0, 0x0, 0x0)
Jan 31 07:28:36 lxd01 lxd[8363]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/container_lxc.go:2371 
+0x696
Jan 31 07:28:36 lxd01 lxd[8363]: 
main.containerDelete.func1(0xc8200c8840, 0x0, 0x0)
Jan 31 07:28:36 lxd01 lxd[8363]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/container_delete.go:22 
+0x3e
Jan 31 07:28:36 lxd01 lxd[8363]: 
main.(*operation).Run.func1(0xc8200c8840, 0xc82011f320)
Jan 31 07:28:36 lxd01 lxd[8363]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/operations.go:110 
+0x3a

Jan 31 07:28:36 lxd01 lxd[8363]: created by main.(*operation).Run
Jan 31 07:28:36 lxd01 lxd[8363]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/operations.go:137 
+0x127
Jan 31 07:28:36 lxd01 systemd[1]: lxd.service: Main process exited, 
code=exited, status=2/INVALIDARGUMENT
Jan 31 07:28:36 lxd01 systemd[1]: lxd.service: Unit entered failed 
state.
Jan 31 07:28:36 lxd01 systemd[1]: lxd.service: Failed with result 
'exit-code'.
Jan 31 07:28:36 lxd01 systemd[1]: lxd.service: Service hold-off time 
over, scheduling restart.

Jan 31 07:28:36 lxd01 systemd[1]: Stopped LXD - main daemon.
Jan 31 07:28:36 lxd01 systemd[1]: Starting LXD - main daemon...
Jan 31 07:28:36 lxd01 lxd[11969]: t=2017-01-31T07:28:36+ lvl=warn 
msg="CGroup memory swap accounting is disabled, swap limits will be 
ignored."

Jan 31 07:28:37 lxd01 systemd[1]: Started LXD - main daemon.




Tomasz

On 2017-01-31 16:17, Tomasz Chmielewski wrote:

lxd process on one of my servers started to hang a few days ago.

In syslog, I can see the following being repeated (log below).

I see a similar issue here:

https://github.com/lxc/lxd/issues/2089

but it's closed.

Running:

ii  lxd  2.0.8-0ubuntu1~ubuntu16.04.2
  amd64Container hypervisor based on LXC - daemon
ii  lxd-client   2.0.8-0ubuntu1~ubuntu16.04.2
  amd64Container hypervisor based on LXC - client

On Ubuntu 16.04 with 4.9.0 kernel from ubuntu ppa.


It seems to recover if I kill this process:

root 26853  0.0  0.0 222164 12228 ?Ssl  07:13   0:00
/usr/bin/lxd waitready --timeout=600


Sometimes need to kill it a few times until it recovers.


Jan 31 06:26:06 lxd01 lxd[15762]: error: LXD still not running after
600s timeout.
Jan 31 06:26:06 lxd01 lxd[4402]: t=2017-01-31T06:26:06+ lvl=warn
msg=&quo

[lxc-users] lxd hang - "panic: runtime error: slice bounds out of range"

2017-01-30 Thread Tomasz Chmielewski
0, 0x0)
Jan 31 06:36:06 lxd01 lxd[21952]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:321 
+0x5c
Jan 31 06:36:06 lxd01 lxd[21952]: 
main.(*storageLogWrapper).ContainerSnapshotDelete(0xc820107a60, 
0x7f0acd705938, 0xc8204c8180, 0x0, 0x0)
Jan 31 06:36:06 lxd01 lxd[21952]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage.go:510 
+0x22b
Jan 31 06:36:06 lxd01 lxd[21952]: 
main.(*containerLXC).Delete(0xc8204c8180, 0x0, 0x0)
Jan 31 06:36:06 lxd01 lxd[21952]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/container_lxc.go:2366 
+0x30e
Jan 31 06:36:06 lxd01 lxd[21952]: 
main.snapshotDelete.func1(0xc8200d42c0, 0x0, 0x0)
Jan 31 06:36:06 lxd01 lxd[21952]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/container_snapshot.go:248 
+0x3e
Jan 31 06:36:06 lxd01 lxd[21952]: 
main.(*operation).Run.func1(0xc8200d42c0, 0xc820011740)
Jan 31 06:36:06 lxd01 lxd[21952]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/operations.go:110 
+0x3a

Jan 31 06:36:06 lxd01 lxd[21952]: created by main.(*operation).Run
Jan 31 06:36:06 lxd01 lxd[21952]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/operations.go:137 
+0x127
Jan 31 06:46:06 lxd01 lxd[21955]: error: LXD still not running after 
600s timeout.





Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd hang - "panic: runtime error: slice bounds out of range"

2017-01-31 Thread Tomasz Chmielewski
Unfortunately it's still crashing, around 1 day after removing the 
docker container:


Feb  1 06:16:20 lxd01 lxd[2593]: error: LXD still not running after 600s 
timeout.
Feb  1 06:16:20 lxd01 systemd[1]: lxd.service: Control process exited, 
code=exited status=1

Feb  1 06:16:20 lxd01 systemd[1]: Failed to start LXD - main daemon.
Feb  1 06:16:20 lxd01 systemd[1]: lxd.service: Unit entered failed 
state.
Feb  1 06:16:20 lxd01 systemd[1]: lxd.service: Failed with result 
'exit-code'.
Feb  1 06:16:20 lxd01 systemd[1]: lxd.service: Service hold-off time 
over, scheduling restart.

Feb  1 06:16:20 lxd01 systemd[1]: Stopped LXD - main daemon.
Feb  1 06:16:20 lxd01 systemd[1]: Starting LXD - main daemon...
Feb  1 06:16:20 lxd01 lxd[29235]: t=2017-02-01T06:16:20+ lvl=warn 
msg="CGroup memory swap accounting is disabled, swap limits will be 
ignored."
Feb  1 06:16:20 lxd01 lxd[29235]: panic: runtime error: slice bounds out 
of range

Feb  1 06:16:20 lxd01 lxd[29235]: goroutine 16 [running]:
Feb  1 06:16:20 lxd01 lxd[29235]: panic(0xadef00, 0xc82000e050)
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/usr/lib/go-1.6/src/runtime/panic.go:481 +0x3e6
Feb  1 06:16:20 lxd01 lxd[29235]: 
main.(*storageBtrfs).getSubVolumes(0xc8200fa240, 0xc82000b880, 0x3a, 
0x0, 0x0, 0x0, 0x0, 0x0)
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:812 
+0x104f
Feb  1 06:16:20 lxd01 lxd[29235]: 
main.(*storageBtrfs).subvolsDelete(0xc8200fa240, 0xc82000b880, 0x3a, 
0x0, 0x0)
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:574 
+0x72
Feb  1 06:16:20 lxd01 lxd[29235]: 
main.(*storageBtrfs).ContainerDelete(0xc8200fa240, 0x7feeacffd0c0, 
0xc8204fa480, 0x0, 0x0)
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:119 
+0xb0
Feb  1 06:16:20 lxd01 lxd[29235]: 
main.(*storageBtrfs).ContainerSnapshotDelete(0xc8200fa240, 
0x7feeacffd0c0, 0xc8204fa480, 0x0, 0x0)
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:321 
+0x5c
Feb  1 06:16:20 lxd01 lxd[29235]: 
main.(*storageLogWrapper).ContainerSnapshotDelete(0xc8200fda60, 
0x7feeacffd0c0, 0xc8204fa480, 0x0, 0x0)
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage.go:510 
+0x22b
Feb  1 06:16:20 lxd01 lxd[29235]: 
main.(*containerLXC).Delete(0xc8204fa480, 0x0, 0x0)
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/container_lxc.go:2366 
+0x30e
Feb  1 06:16:20 lxd01 lxd[29235]: 
main.snapshotDelete.func1(0xc8205400b0, 0x0, 0x0)
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/container_snapshot.go:248 
+0x3e
Feb  1 06:16:20 lxd01 lxd[29235]: 
main.(*operation).Run.func1(0xc8205400b0, 0xc820258f60)
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/operations.go:110 
+0x3a

Feb  1 06:16:20 lxd01 lxd[29235]: created by main.(*operation).Run
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/operations.go:137 
+0x127
Feb  1 06:16:20 lxd01 systemd[1]: lxd.service: Main process exited, 
code=exited, status=2/INVALIDARGUMENT



Any clues what's causing this and how to fix?


Tomasz

On 2017-01-31 16:29, Tomasz Chmielewski wrote:

I think it may be related to
https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/

I have a docker container, with several dockers inside, and with lxd 
snapshots.


Doing this:

# lxc delete docker
error: Get
http://unix.socket/1.0/operations/7d30bf41-3af6-4b48-b42c-06fdd2bba48b/wait:
EOF

Results in:

Jan 31 07:28:36 lxd01 lxd[8363]: panic: runtime error: slice bounds out 
of range

Jan 31 07:28:36 lxd01 lxd[8363]: goroutine 867 [running]:
Jan 31 07:28:36 lxd01 lxd[8363]: panic(0xadef00, 0xc82000e050)
Jan 31 07:28:36 lxd01 lxd[8363]:
#011/usr/lib/go-1.6/src/runtime/panic.go:481 +0x3e6
Jan 31 07:28:36 lxd01 lxd[8363]:
main.(*storageBtrfs).getSubVolumes(0xc8200f8240, 0xc82000bb80, 0x3a,
0x0, 0x0, 0x0, 0x0, 0x0)
Jan 31 07:28:36 lxd01 lxd[8363]:
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:812
+0x104f
Jan 31 07:28:36 lxd01 lxd[8363]:
main.(*storageBtrfs).subvolsDelete(0xc8200f8240, 0xc82000bb80, 0x3a,
0x0, 0x0)
Jan 31 07:28:36 lxd01 lxd[8363]:
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:574
+0x72
Jan 31 07:28:36 lxd01 lxd[8363]:
main.(*storageBtrfs).ContainerDelete(0xc8200f8240, 0x7f5f55ba1900,
0xc8204a0480, 0x0, 0x0)
Jan 31 07:28:36 lxd01 lxd[8363]:
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/l

Re: [lxc-users] lxd hang - "panic: runtime error: slice bounds out of range"

2017-02-01 Thread Tomasz Chmielewski

FYI it was crashing in the following conditions:

- privileged container (i.e. the one for docker)
- on btrfs filesystem
- btrfs subvolume created inside the container (docker would create such 
subvolumes)

- snapshot of the container made
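
A minimal sketch of those conditions, assuming a btrfs-backed LXD and an 
image alias that exists on your remote (container and subvolume names are 
only examples):

# lxc launch ubuntu:16.04 docker-test -c security.privileged=true
# lxc exec docker-test -- btrfs subvolume create /var/lib/test-subvol
# lxc snapshot docker-test snap0
# lxc delete docker-test    <- this is where the unpatched daemon panicked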


Will your fix eventually go into e.g. 2.0.9?


Tomasz


On 2017-02-01 19:02, Stéphane Graber wrote:

Hey there,

I wrote a fix for that function just yesterday which I think should fix
your issue. It's been merged in git but isn't in any released version 
of

LXD yet.

Since you're using LXD 2.0.8, I made a build of 2.0.8 with that one fix
applied at: https://dl.stgraber.org/lxd-2.0.8-btrfs
SHA256: 
4d9a7ef7c4635d7dd6c3e41f0eb1a3db12d38a8148b3940aa801c7355510e815


Stéphane

On Wed, Feb 01, 2017 at 03:19:44PM +0900, Tomasz Chmielewski wrote:
Unfortunately it's still crashing, around 1 day after removing the 
docker

container:

Feb  1 06:16:20 lxd01 lxd[2593]: error: LXD still not running after 
600s

timeout.
Feb  1 06:16:20 lxd01 systemd[1]: lxd.service: Control process exited,
code=exited status=1
Feb  1 06:16:20 lxd01 systemd[1]: Failed to start LXD - main daemon.
Feb  1 06:16:20 lxd01 systemd[1]: lxd.service: Unit entered failed 
state.

Feb  1 06:16:20 lxd01 systemd[1]: lxd.service: Failed with result
'exit-code'.
Feb  1 06:16:20 lxd01 systemd[1]: lxd.service: Service hold-off time 
over,

scheduling restart.
Feb  1 06:16:20 lxd01 systemd[1]: Stopped LXD - main daemon.
Feb  1 06:16:20 lxd01 systemd[1]: Starting LXD - main daemon...
Feb  1 06:16:20 lxd01 lxd[29235]: t=2017-02-01T06:16:20+ lvl=warn
msg="CGroup memory swap accounting is disabled, swap limits will be
ignored."
Feb  1 06:16:20 lxd01 lxd[29235]: panic: runtime error: slice bounds 
out of

range
Feb  1 06:16:20 lxd01 lxd[29235]: goroutine 16 [running]:
Feb  1 06:16:20 lxd01 lxd[29235]: panic(0xadef00, 0xc82000e050)
Feb  1 06:16:20 lxd01 lxd[29235]:
#011/usr/lib/go-1.6/src/runtime/panic.go:481 +0x3e6
Feb  1 06:16:20 lxd01 lxd[29235]:
main.(*storageBtrfs).getSubVolumes(0xc8200fa240, 0xc82000b880, 0x3a, 
0x0,

0x0, 0x0, 0x0, 0x0)
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:812

+0x104f
Feb  1 06:16:20 lxd01 lxd[29235]:
main.(*storageBtrfs).subvolsDelete(0xc8200fa240, 0xc82000b880, 0x3a, 
0x0,

0x0)
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:574

+0x72
Feb  1 06:16:20 lxd01 lxd[29235]:
main.(*storageBtrfs).ContainerDelete(0xc8200fa240, 0x7feeacffd0c0,
0xc8204fa480, 0x0, 0x0)
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:119

+0xb0
Feb  1 06:16:20 lxd01 lxd[29235]:
main.(*storageBtrfs).ContainerSnapshotDelete(0xc8200fa240, 
0x7feeacffd0c0,

0xc8204fa480, 0x0, 0x0)
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:321

+0x5c
Feb  1 06:16:20 lxd01 lxd[29235]:
main.(*storageLogWrapper).ContainerSnapshotDelete(0xc8200fda60,
0x7feeacffd0c0, 0xc8204fa480, 0x0, 0x0)
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage.go:510

+0x22b
Feb  1 06:16:20 lxd01 lxd[29235]: 
main.(*containerLXC).Delete(0xc8204fa480,

0x0, 0x0)
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/container_lxc.go:2366

+0x30e
Feb  1 06:16:20 lxd01 lxd[29235]: 
main.snapshotDelete.func1(0xc8205400b0,

0x0, 0x0)
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/container_snapshot.go:248

+0x3e
Feb  1 06:16:20 lxd01 lxd[29235]: 
main.(*operation).Run.func1(0xc8205400b0,

0xc820258f60)
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/operations.go:110

+0x3a
Feb  1 06:16:20 lxd01 lxd[29235]: created by main.(*operation).Run
Feb  1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/operations.go:137

+0x127
Feb  1 06:16:20 lxd01 systemd[1]: lxd.service: Main process exited,
code=exited, status=2/INVALIDARGUMENT


Any clues what's causing this and how to fix?


Tomasz

On 2017-01-31 16:29, Tomasz Chmielewski wrote:
> I think it may be related to
> https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/
>
> I have a docker container, with several dockers inside, and with lxd
> snapshots.
>
> Doing this:
>
> # lxc delete docker
> error: Get
> http://unix.socket/1.0/operations/7d30bf41-3af6-4b48-b42c-06fdd2bba48b/wait:
> EOF
>
> Results in:
>
> Jan 31 07:28:36 lxd01 lxd[8363]: panic: runtime error: slice bounds out
> of range
> Jan 31 07:28:36 lxd01 lxd[8363]: goroutine 867 [running]:
> Jan 31 07:28:36 lxd01 lxd[83

Re: [lxc-users] lxc stop / lxc reboot hang

2017-02-02 Thread Tomasz Chmielewski

On 2017-02-03 12:52, Tomasz Chmielewski wrote:

Suddenly, today, I'm not able to stop or reboot any of the containers:

# lxc stop some-container

Just sits there forever.


In /var/log/lxd/lxd.log, only this single entry shows up:

t=2017-02-03T03:46:20+ lvl=info msg="Shutting down container"
creation date=2017-01-19T15:51:21+ ephemeral=false timeout=-1s
name=some-container action=shutdown


In /var/log/lxd/some-container/lxc.log, only this one shows up:

lxc 20170203034624.534 WARN lxc_commands -
commands.c:lxc_cmd_rsp_recv:172 - command get_cgroup failed to receive
response


The container actually stops (it's in STOPPED state in "lxc list").

The command just never returns.


Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc stop / lxc reboot hang

2017-02-02 Thread Tomasz Chmielewski

Suddenly, today, I'm not able to stop or reboot any of the containers:

# lxc stop some-container

Just sits there forever.


In /var/log/lxd/lxd.log, only this single entry shows up:

t=2017-02-03T03:46:20+ lvl=info msg="Shutting down container" 
creation date=2017-01-19T15:51:21+ ephemeral=false timeout=-1s 
name=some-container action=shutdown



In /var/log/lxd/some-container/lxc.log, only this one shows up:

lxc 20170203034624.534 WARN lxc_commands - 
commands.c:lxc_cmd_rsp_recv:172 - command get_cgroup failed to receive 
response



Running these:

ii  lxd  2.0.8-0ubuntu1~ubuntu16.04.2
amd64Container hypervisor based on LXC - daemon
ii  lxd-client   2.0.8-0ubuntu1~ubuntu16.04.2
amd64Container hypervisor based on LXC - client
ii  liblxc1  2.0.6-0ubuntu1~ubuntu16.04.2
amd64Linux Containers userspace tools (library)
ii  lxc-common   2.0.6-0ubuntu1~ubuntu16.04.2
amd64Linux Containers userspace tools (common tools)
ii  lxcfs2.0.5-0ubuntu1~ubuntu16.04.1
amd64FUSE based filesystem for LXC


# uname -a
Linux lxd01 4.9.0-040900-generic #201612111631 SMP Sun Dec 11 21:33:00 
UTC 2016 x86_64 x86_64 x86_64 GNU/Linux



Any clues how to fix this?


Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc stop / lxc reboot hang

2017-03-01 Thread Tomasz Chmielewski

Again seeing this issue on one of the servers:

- "lxc stop container" will stop the container but will never exit
- "lxc restart container" will stop the container and will never exit


# dpkg -l|grep lxd
ii  lxd  2.0.9-0ubuntu1~16.04.2  
amd64Container hypervisor based on LXC - daemon
ii  lxd-client   2.0.9-0ubuntu1~16.04.2  
amd64Container hypervisor based on LXC - client



This gets logged to container log:

lxc 20170301115514.738 WARN lxc_commands - 
commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive 
response: Connection reset by peer.



How can it be debugged?

Tomasz


On 2017-02-03 13:05, Tomasz Chmielewski wrote:

On 2017-02-03 12:52, Tomasz Chmielewski wrote:

Suddenly, today, I'm not able to stop or reboot any of the containers:

# lxc stop some-container

Just sits there forever.


In /var/log/lxd/lxd.log, only this single entry shows up:

t=2017-02-03T03:46:20+ lvl=info msg="Shutting down container"
creation date=2017-01-19T15:51:21+ ephemeral=false timeout=-1s
name=some-container action=shutdown


In /var/log/lxd/some-container/lxc.log, only this one shows up:

lxc 20170203034624.534 WARN lxc_commands -
commands.c:lxc_cmd_rsp_recv:172 - command get_cgroup failed to receive
response


The container actually stops (it's in STOPPED state in "lxc list").

The command just never returns.


Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc exec - cp: Value too large for defined data type

2016-10-04 Thread Tomasz Chmielewski
I'm getting a weird issue with cp used with "lxc exec container-name 
/bin/bash /some/script.sh".


/some/script.sh launches /some/other/script.sh, which in turn may run 
some other script.

One of them is copying data with cp.

In some cases, cp complains that "Value too large for defined data 
type", for example:


'/vagrant/scripts/provision/shell/initial_deploy/rootfs/etc/php5/pool.d/phpmyadmin.conf' 
-> '/etc/php5/fpm/pool.d/phpmyadmin.conf'
'/vagrant/scripts/provision/shell/initial_deploy/rootfs/etc/php5/pool.d/www.conf' 
-> '/etc/php5/fpm/pool.d/www.conf'
'/vagrant/scripts/provision/shell/initial_deploy/rootfs/etc/nginx/conf.d/default.conf' 
-> '/etc/nginx/conf.d/default.conf'
cp: 
'/vagrant/scripts/provision/shell/initial_deploy/rootfs/etc/nginx/conf.d/default.conf': 
Value too large for defined data type
cp: 
'/vagrant/scripts/provision/shell/initial_deploy/rootfs/etc/nginx/conf.d': 
Value too large for defined data type



"/vagrant/" is a bind-mount directory within LXD:

  vagrant:
path: /vagrant
source: /home/vagrantvm/Desktop/vagrant
type: disk


Anyone else seeing this?

The files are copied, just the warning is a bit strange.


Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc exec - cp: Value too large for defined data type

2016-10-04 Thread Tomasz Chmielewski

On 2016-10-05 00:05, Michael Peek wrote:
I could be completely wrong about everything, but here's what I think 
is

going on:

If I'm correct then the version of cp you have inside the container was
compiled without large file support enabled.  What constitutes a 
"large"
file is dependent on whether you're using a 32-bit or 64-bit system.  
If

your container is running a 32-bit image, then a large file is any file
whose size is 2GB or larger.  For containers running a 64-bit
image, a large file is any file of size 4GB or larger.  To support large
files programs usually only need to be compiled with certain compiler
flags enabled to tell the compiler to activate "large file"-specific
code within the source, either that or the libraries that the program
depends upon need to support large files by default.


The container is Ubuntu 14.04; host is Ubuntu 16.04.

"cp" is copying config files.

They are a few hundred bytes in size, a few kilobytes maximum.


So it must be something else.


Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc exec - cp: Value too large for defined data type

2016-10-04 Thread Tomasz Chmielewski

On 2016-10-05 00:41, Tomasz Chmielewski wrote:

On 2016-10-05 00:05, Michael Peek wrote:
I could be completely wrong about everything, but here's what I think 
is

going on:

If I'm correct then the version of cp you have inside the container 
was
compiled without large file support enabled.  What constitutes a 
"large"
file is dependent on whether you're using a 32-bit or 64-bit system.  
If
your container is running a 32-bit image, then a large file is any 
file

whose size is 2GB or larger.  For containers running a 64-bit
image, a large file is any file of size 4GB or larger.  To support large
files programs usually only need to be compiled with certain compiler
flags enabled to tell the compiler to activate "large file"-specific
code within the source, either that or the libraries that the program
depends upon need to support large files by default.


The container is Ubuntu 14.04; host is Ubuntu 16.04.

"cp" is copying config files.

They are a few hundred bytes in size, a few kilobytes maximum.


So it must be something else.


I think it comes from the fact that the files I'm copying were treated 
with setfacl on the host.
Inside the container, cp is unable to recreate exactly the same 
permissions/ACLs, and outputs a confusing warning:


* cp with "-a":

lxd# cp -av /vagrant/0-byte-file /root
'/vagrant/0-byte-file' -> '/root/0-byte-file'
cp: '/vagrant/0-byte-file': Value too large for defined data type


* cp without "-a":

lxd# cp -v /vagrant/0-byte-file /root
'/vagrant/0-byte-file' -> '/root/0-byte-file'
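
A quick way to confirm the ACL theory, assuming getfacl/setfacl are 
installed on the host (the file name and the ACL entry are just examples):

host# setfacl -m u:nobody:r /home/vagrantvm/Desktop/vagrant/0-byte-file
host# getfacl /home/vagrantvm/Desktop/vagrant/0-byte-file

# inside the container, skipping mode/xattr preservation may avoid the warning
lxd# cp -av --no-preserve=mode,xattr /vagrant/0-byte-file /root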



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] dockerfile equivalent in LXC

2016-10-04 Thread Tomasz Chmielewski

On 2016-10-05 00:13, Barve, Yogesh Damodar wrote:

For creating  a docker container one can use dockerfile to specify all
the required software installation packages and initialization,
entrypoint directory and entrypoint command.


LXD or LXC virtualize the whole operating system, so some of these terms 
don't make sense here.




What will be the equivalent in the LXC world? How can one specify
- the required packages for installations,


LXD installs a distribution.

For example - this one will install Ubuntu 14.04 ("Trusty"):

lxc launch images:ubuntu/trusty/amd64 some-container-name


Then, to install any packages, you can do:

lxc exec some-container-name -- apt-get install some-package



- workdirectory,
- entrypoint command,


These don't make sense for LXC / LXD - a container boots a full system with 
its own init, so services and their working directories are configured 
inside the container like on any other machine.



- ports to expose and


LXC / LXD behave like proper systems with full networking.

By default, the container's IP is "exposed" to the host. What you do with 
it depends on your use case.


There are many answers to that question I guess.

1) assign a public IP to the container

2) redirect a single port to the container with iptables or a proxy
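
(A minimal iptables sketch of option 2, assuming the host's public 
interface is eth0 and the container answers on 10.10.10.10:80 - the 
addresses and ports are only examples:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 \
  -j DNAT --to-destination 10.10.10.10:80
)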



- volumes to mount in LXC?


CONTAINER=some-container-name
MOUNTNAME=something
MOUNTPATH=/mnt/on/the/host
CONTAINERPATH=/mnt/inside/the/container
lxc config device add $CONTAINER $MOUNTNAME disk source=$MOUNTPATH 
path=$CONTAINERPATH



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LAN for LXD containers (with multiple LXD servers)?

2016-09-18 Thread Tomasz Chmielewski
It's easy to create a "LAN" for LXD containers on a single LXD server - 
just attach them to the same bridge, use the same subnet (i.e. 
10.10.10.0/24) - done. Containers can communicate with each other using 
their private IP address.
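
(A minimal sketch of the single-server case, assuming a host bridge named 
ctbr0 already exists:

# lxc config device add container1 eth0 nic nictype=bridged parent=ctbr0
# lxc config device add container2 eth0 nic nictype=bridged parent=ctbr0
)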


However, with more than one LXD server *not* in the same LAN (e.g. two 
LXD servers in different datacentres), things get tricky.



Is anyone using such setups, with multiple LXD servers and containers 
being able to communicate with each other?



LXD1: IP 1.2.3.4, EuropeLXD2: IP 2.3.4.5, Asia
container1, 10.10.10.10 container4, 10.10.10.20
container2, 10.10.10.11 container5, 10.10.10.21
container3, 10.10.10.12 container6, 10.10.10.22


LXD3: IP 3.4.5.6, US
container7, 10.10.10.30
container8, 10.10.10.31
container8, 10.10.10.32


While I can imagine setting up many OpenVPN tunnels between all LXD 
servers (LXD1-LXD2, LXD1-LXD3, LXD2-LXD3) and constantly adjusting the 
routes as containers are stopped/started/migrated, it's a bit of a 
management nightmare. And even more so if the number of LXD servers 
grows.


Hints, discussion?


Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LAN for LXD containers (with multiple LXD servers)?

2016-09-18 Thread Tomasz Chmielewski

On 2016-09-18 21:05, Sergiusz Pawlowicz wrote:
On Sun, Sep 18, 2016 at 4:16 PM, Tomasz Chmielewski <man...@wpkg.org> 
wrote:


While I can imagine setting up many OpenVPN tunnels between all LXD 
servers


I cannot imagine that :-) :-)

Use tinc, mate. Your life begins :-)

https://www.tinc-vpn.org/


I did some reading about tinc before, and according to documentation and 
mailing lists:


- performance may not be so great

- it gets problematic as the number of tinc instances grows (a few will be 
OK, dozens will work, but beyond that, things might get slow)


- if I'm not mistaken, you need to run a tinc instance per LXD client, 
not per LXD server, so that's extra management and performance overhead 
(i.e. if two tinc clients are running on the same server, they would 
still encrypt the traffic to each other)



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LAN for LXD containers (with multiple LXD servers)?

2016-09-18 Thread Tomasz Chmielewski

On 2016-09-18 22:14, Ron Kelley wrote:

(Long reply follows…)

Personally, I think you need to look at the big picture for such
deployments.  From what I read below, you are asking, “how do I extend
my layer-2 subnets between data centers such that container1 in Europe
can talk with container6 in Asia, etc”.  If this is true, I think you
need to look at deploying data center hardware (servers with multiple
NICs, IPMI/DRAC/iLO interfaces) with proper L2/L3 routing (L2TP/IPSEC,
etc).  And, you must look at how your failover services will work in
this design.  It’s easy to get a couple of servers working with a
simple design, but those simple designs tend to go to production very
fast without proper testing and design.


Well, it's not only about deploying on "different continents".

It can also be within the same datacentre, where the hosting provider 
doesn't give you a LAN option.


For example - Amazon AWS, same region, same availability zone.

The servers will have "private" addresses like 10.x.x.x, traffic there 
will be private to your servers, but there will be no LAN. You can't 
assign your own LAN addresses (10.x.x.x).


This means that while you can launch several LXD containers on each of 
these servers, their "LAN" will be limited to each individual LXD server 
(unless we do some special tricks).


Some other hosting providers offer a public IP, or several public IPs per 
server, in the same datacentre, but again, no LAN.



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LAN for LXD containers (with multiple LXD servers)?

2016-09-19 Thread Tomasz Chmielewski

On 2016-09-19 05:12, Tilak Waelde wrote:

Hope this helps.  Happy to share my LXD configurations with anyone...


Please do! I'd really love to see a description of a production lxd /
lxc setup with proper networking and multiple hosts!

I haven't played around with it yet, but is it possible to include
some sort of VRF-lite[0] into such a setup for multi tenancy purposes?
Other than by using VLANs one could use the same IP ranges multiple
times from what I've come to understand?
I'm not sure how a user could put the containers interfaces into a
different network namespace..


Hi,

after some experimenting with VXLAN, I've summed up a working "LAN for 
multiple LXC servers" here:


https://lxadm.com/Unicast_VXLAN:_overlay_network_for_multiple_servers_with_dozens_of_containers


It uses in-kernel VXLAN, and thus performs very well (almost wire speed, 
and much better than any userspace program).


On the other hand, it provides no encryption between the LXD servers (or, 
in fact, any other virtualization using it), so whether it fits may depend 
on your exact requirements.
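
(The gist of the setup, heavily trimmed - a sketch only, assuming eth0 
carries the inter-server traffic, VXLAN ID 42 and a host bridge ctbr0 that 
the containers are attached to; run the equivalent on every LXD server, 
appending one fdb entry per remote peer:

ip link add vxlan42 type vxlan id 42 dev eth0 dstport 4789 nolearning
bridge fdb append 00:00:00:00:00:00 dev vxlan42 dst 2.3.4.5
bridge fdb append 00:00:00:00:00:00 dev vxlan42 dst 3.4.5.6
ip link set vxlan42 master ctbr0
ip link set vxlan42 up
)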



Tomasz Chmielewski
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Will there be an extra section added to the LXD 2.0 blog post series for the new Networking capabilities?

2016-09-28 Thread Tomasz Chmielewski

On 2016-09-29 12:03, Stéphane Graber wrote:

On Wed, Sep 28, 2016 at 10:56:48PM -0400, brian mullan wrote:

The current 12 part blog post series is really helpful & informative:

https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/

But all the newly announced LXD 2.3 networking features
<https://linuxcontainers.org/lxd/news/> are pretty exciting and it 
would be

great to
see a chapter on that included in that Blog Post Series too in order 
to

jump start folks on how to use all of the new capabilities.


It's not going to be part of the 2.0 series since it's not in LXD 2.0,
but I'll likely be posting something about the new network stuff in the
next few weeks.


"cross-host tunnels with GRE or VXLAN"

Interesting!

Will it be limited to 2 LXD servers only, or will it allow an arbitrary 
number of LXD servers (2, 3, 4 and more)?



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Will there be an extra section added to the LXD 2.0 blog post series for the new Networking capabilities?

2016-09-28 Thread Tomasz Chmielewski

On 2016-09-29 12:14, Stéphane Graber wrote:


>It's not going to be part of the 2.0 series since it's not in LXD 2.0,
>but I'll likely be posting something about the new network stuff in the
>next few weeks.

"cross-host tunnels with GRE or VXLAN"

Interesting!

Will it be limited to 2 LXD servers only, or will it allow an 
arbitrary

number of LXD servers (2, 3, 4 and more)?


You can add as many tunnels to the configuration as you want.

The default VXLAN configuration also uses multicast, so any host that's
part of the same multicast group will be connected.


Multicast... so that's not going to work for most people using popular 
server hosting providers, since they don't offer multicast.



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Container free memory vs host and OOM errors

2016-09-26 Thread Tomasz Chmielewski

On 2016-09-27 11:24, Mathias Gibbens wrote:

Hi,

  Recently I've been setting up unprivileged LXC containers on an older
server that has 6GB of physical RAM. As my containers are running,
occasionally I am seeing OOM errors in the host's syslog when the 
kernel




  This system is running Debian stretch (currently the "testing"
distribution), 64bit kernel 4.7.4-grsec


It might be the 4.7.x kernel.

There was some OOM regression in 4.7.x, but I'm not sure if it was fixed 
in 4.7.4 or not.



Tomasz Chmielewski
https://lxadm.com

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXD: network connectivity dies when doing lxc stop / lxc start

2016-10-28 Thread Tomasz Chmielewski
Here is a weird one, and most likely not LXD's fault, but some issue with 
bridged networking.


I'm using bridged networking for all containers. The problem is that if I 
stop and start one container, some other containers lose connectivity. 
They lose connectivity for 10-20 seconds, sometimes up to a minute.



For example:

# ping 8.8.8.8
(...)
64 bytes from 8.8.8.8: icmp_seq=16 ttl=48 time=15.0 ms
64 bytes from 8.8.8.8: icmp_seq=17 ttl=48 time=15.0 ms
64 bytes from 8.8.8.8: icmp_seq=18 ttl=48 time=15.1 ms
64 bytes from 8.8.8.8: icmp_seq=19 ttl=48 time=15.0 ms
64 bytes from 8.8.8.8: icmp_seq=20 ttl=48 time=15.0 ms

...another container stopped/started...
...40 seconds of broken connectivity...

64 bytes from 8.8.8.8: icmp_seq=60 ttl=48 time=15.1 ms
64 bytes from 8.8.8.8: icmp_seq=61 ttl=48 time=15.1 ms
64 bytes from 8.8.8.8: icmp_seq=62 ttl=48 time=15.1 ms
64 bytes from 8.8.8.8: icmp_seq=63 ttl=48 time=15.0 ms


Pinging the gateway dies in a similar way.

The networking is as follows:

containers - eth0, private addressing (192.168.0.x)
host - "ctbr0" - private address (192.168.0.1), plus NAT into the world


auto ctbr0
iface ctbr0 inet static
address 192.168.0.1
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0


The only workaround seems to be arpinging the gateway from the container 
all the time, for example:


# arping 192.168.0.1

This way, the container doesn't lose connectivity when other containers 
are stopped/started.


But of course I don't like this kind of fix.

Is anyone else seeing this too? Any better workaround than constant 
arping from all affected containers?
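
In the meantime, if I have to keep the arping workaround, I'd at least 
automate it inside the affected containers. A sketch only - unit name, 
interface and gateway address are examples, and depending on which arping 
package is installed the interface flag may be -I (iputils) or -i (habets):

# /etc/systemd/system/arping-gateway.service
[Unit]
Description=Keep refreshing the gateway ARP entry (workaround)
After=network.target

[Service]
ExecStart=/usr/bin/arping -I eth0 192.168.0.1
Restart=always

[Install]
WantedBy=multi-user.target

# systemctl daemon-reload && systemctl enable --now arping-gateway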



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Question about your storage on multiple LXC/LXD nodes

2016-11-03 Thread Tomasz Chmielewski

ZFS is not a distributed filesystem.

So the only way to do what you want is to use DRBD, and ZFS on top of 
it.
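
Roughly, that could look like the following - a sketch only, with made-up 
hostnames, addresses and block devices, and not a tested configuration:

# /etc/drbd.d/r0.res (identical on both nodes)
resource r0 {
    protocol C;
    on serv1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on serv2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}

# on both nodes
drbdadm create-md r0
drbdadm up r0

# on the node that should be active
drbdadm primary --force r0
zpool create lxd /dev/drbd0

On failover you'd promote the other node to primary and import the pool 
there; only one node can have the pool imported at a time.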



Tomasz Chmielewski
https://lxadm.com


On 2016-11-03 22:42, Benoit GEORGELIN - Association Web4all wrote:

Thanks, looks like nobody uses LXD in a cluster

Cordialement,

Benoît

-

DE: "Tomasz Chmielewski" <man...@wpkg.org>
À: "lxc-users" <lxc-users@lists.linuxcontainers.org>
CC: "Benoit GEORGELIN - Association Web4all"
<benoit.george...@web4all.fr>
ENVOYÉ: Mercredi 2 Novembre 2016 12:01:50
OBJET: Re: [lxc-users] Question about your storage on multiple LXC/LXD
nodes

On 2016-11-03 00:53, Benoit GEORGELIN - Association Web4all wrote:

Hi,

I'm wondering what kind of storage are you using in your
infrastructure ?
In a multiple LXC/LXD nodes how would you design the storage part to
be redundant and give you the flexibility to start a container from
any host available ?

Let's say I have two (or more) LXC/LXD nodes and I want to be able

to

start the containers on one or the other node.
LXD allow to move containers across nodes by transferring the data
from node A to node B but I'm looking to be able to run the

containers

on node B if node A is in maintenance or crashed.

There are a lot of distributed filesystems (gluster, ceph, beegfs,
swift etc.), but in my case I like using ZFS with LXD and I would
like to try to keep that possibility.


If you want to stick with ZFS, then your only option is setting up
DRBD.

Tomasz Chmielewski
https://lxadm.com

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Question about your storage on multiple LXC/LXD nodes

2016-11-02 Thread Tomasz Chmielewski

On 2016-11-03 00:53, Benoit GEORGELIN - Association Web4all wrote:

Hi,

I'm wondering what kind of storage are you using in your
infrastructure ?
In a multiple LXC/LXD nodes how would you design the storage part to
be redundant and give you the flexibility to start a container from
any host available ?

Let's say I have two (or more) LXC/LXD nodes and I want to be able to
start the containers on one or the other node.
LXD allow to move containers across nodes by transferring the data
from node A to node B but I'm looking to be able to run the containers
on node B if node A is in maintenance or crashed.

There are a lot of distributed filesystems (gluster, ceph, beegfs,
swift etc.), but in my case I like using ZFS with LXD and I would
like to try to keep that possibility.


If you want to stick with ZFS, then your only option is setting up DRBD.


Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Strange freezes with btrfs backend

2016-12-03 Thread Tomasz Chmielewski

On 2016-12-03 18:31, Pierce Ng wrote:

On Sat, Dec 03, 2016 at 11:49:02AM +0700, Sergiusz Pawlowicz wrote:

With 1GB of memory it is not recommended to use ZFS nor BTRFS,
especially via a disk image file. Just forget about it.


The same VPS - as "same" as a VPS can be :-) - ran LXC on Ubuntu 14.04 
fine. For my workload, perhaps the directory or LVM backends for LXD are 
good enough.


Don't use a file-based image for a different filesystem - your 
performance will be poor, and you risk losing the whole filesystem if 
something goes wrong with your main fs.


Exceptions to this rule are perhaps:

- testing
- recovery
- KVM set up to use a file-based image for a VM (still, not perfect)


If you want to use LXD with btrfs, it's still quite easy to do - just 
stick to the following rules (example commands after the list):


- btrfs should be placed on a separate block device (disk/partition) - 
so, let's say /dev/xvdb1 mounted to /var/lib/lxd


- btrfs is still not very stable (single-device/no-RAID and RAID-1 setups 
are fine for basic workloads, though you may still hit an occasional 
hiccup) - use the most recent stable kernel with btrfs (for Ubuntu, you can 
get it from http://kernel.ubuntu.com/~kernel-ppa/mainline/ - you'll need 
"raw.lxc: lxc.aa_allow_incomplete=1" in the container's config); kernel 4.4 
used in Ubuntu 16.04 is too old and you will run into problems after some 
time!


- chattr +C on database (mysql, postgres, mongo, elasticsearch) 
directories - otherwise performance will be very poor (note chattr +C 
only works on directories and empty files; for existing files, you'll 
have to move them out and copy them back in)
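
The example commands promised above - a sketch, with a made-up block 
device, container name and paths; adjust to your setup:

# put btrfs on its own partition and mount it where LXD keeps its data
mkfs.btrfs /dev/xvdb1
mount -o noatime /dev/xvdb1 /var/lib/lxd    # plus a matching /etc/fstab entry

# per-container apparmor workaround needed with mainline kernels
lxc config set mycontainer raw.lxc "lxc.aa_allow_incomplete=1"

# mark the (still empty) database directory NOCOW before any data lands in it
# (run wherever you can reach the container's filesystem, e.g. from the host)
mkdir /path/to/container/rootfs/var/lib/mysql
chattr +C /path/to/container/rootfs/var/lib/mysql
# then move the existing datadir contents in and point the database at it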



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc copy local remote: - how to copy a container without snapshots?

2016-12-01 Thread Tomasz Chmielewski

lxc copy local remote: seems to copy a container with all its snapshots.

Is it possible to use "lxc copy local remote:" so it just copies 
/var/lib/lxd/containers/local, without snapshots?



Tried these on the "local" side (with 2.0.5 on the remote):

# lxc --version
2.6.2

# lxc --version
2.0.5


Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] systemd-cgtop - Input/s Output/s empty?

2017-01-08 Thread Tomasz Chmielewski
Not really LXD issue, but since systemd-cgtop is quite useful to check 
how busy the containers are, posting it here.


Does anyone know how to make systemd-cgtop show Input/s and Output/s? Right 
now, it's only showing Tasks, %CPU and Memory - the Input/s and Output/s 
columns just show "-".



I've tried setting DefaultBlockIOAccounting=yes in 
/etc/systemd/system.conf, but it doesn't change anything (even after 
systemd reload, system restart).


Any hints?


Tomasz Chmielewski
https://lxadm.com

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] glusterfs on LXD?

2016-12-21 Thread Tomasz Chmielewski

On 2016-12-22 11:56, Tomasz Chmielewski wrote:

Ubuntu 16.04 hosts / containers and gluster 3.8:

# gluster volume create storage replica 2 transport tcp
serv1:/gluster/data serv2:/gluster/data force
volume create: storage: failed: Glusterfs is not supported on brick:
serv1:/gluster/data.
Setting extended attributes failed, reason: Operation not permitted.


Host filesystem on both bricks supports xattr - but the container can only
set user attributes, not trusted attributes:

# touch file
# setfattr -n user.some -v "thing" file
# getfattr file
# file: file
user.some

# setfattr -n trusted.some2 -v "thing2" file
setfattr: file: Operation not permitted


Anyone managed to run glusterfs on LXD?


I see it does run if the container is run as privileged:

# lxc config set serv1 security.privileged true

But that's perhaps not a sysadmin's dream.

Is there any other way to allow the container to set trusted attrs?



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] glusterfs on LXD?

2016-12-21 Thread Tomasz Chmielewski

Ubuntu 16.04 hosts / containers and gluster 3.8:

# gluster volume create storage replica 2 transport tcp 
serv1:/gluster/data serv2:/gluster/data force
volume create: storage: failed: Glusterfs is not supported on brick: 
serv1:/gluster/data.

Setting extended attributes failed, reason: Operation not permitted.


Host filesystem on both bricks supports xattr - but the container can only 
set user attributes, not trusted attributes:


# touch file
# setfattr -n user.some -v "thing" file
# getfattr file
# file: file
user.some

# setfattr -n trusted.some2 -v "thing2" file
setfattr: file: Operation not permitted


Anyone managed to run glusterfs on LXD?


Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD - Small Production Deployment - Storage

2017-03-29 Thread Tomasz Chmielewski

On 2017-03-29 22:13, Gabriel Marais wrote:

Hi Guys

If this is the incorrect platform for this post, please point me in
the right direction.

We are in the process of deploying a small production environment with
the following equipment:-

2 x Dell R430 servers each with 128GB Ram and 3 x 600GB SAS 10k drives
1 x Dell PowerVault MD3400 with
3 x 600GB 15k SAS Drives
3 x 6TB 7.2k Nearline SAS drives

The PowerVault is cabled directly to the Host Servers via Direct
Attached Storage, redundantly.


We would like to run a mixture of KVM and LXD containers on both Host 
Servers.


The big question is, how do we implement the PowerVault (and to a
certain extent the storage on the Host Servers themselves) to be most
beneficial in this mixed environment.

I have a few ideas on what I could do, but since I don't have much
experience with shared storage, I am probably just grasping at straws and
would like to hear from others who probably have more experience than
me.


NFS for LXD and iSCSI for KVM?

Just don't use an NFS client against an NFS server on the same machine (same 
kernel), as this will break (hang).



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD - Small Production Deployment - Storage

2017-03-29 Thread Tomasz Chmielewski

On 2017-03-29 22:47, Marat Khalili wrote:
Just don't use an NFS client against an NFS server on the same machine (same 
kernel), as this will break (hang).

Huh? Works for me between LXC containers. Only had to tune the
startup/shutdown sequence in systemd. In exactly what situation does
it hang? /worried/


Kernel deadlocks.

Note that you can have an NFS client on machineA and an NFS server on machineB 
- this is fine, as these are two different kernels.

An NFS client on machineA, and a KVM guest with an NFS server also on machineA 
is fine, too - two different kernels.


An NFS client and an NFS server on machineA, same kernel (as is the case with 
LXD) - will eventually deadlock.


The whole machine will not freeze all of a sudden, but you will notice 
that you have more and more processes in D state, which you're not able to 
kill.



See for example:

https://lwn.net/Articles/595652/

It could be that something has improved in newer kernels, but I'm not aware 
of it.



If you need to export NFS-like mounts on the same server, you can use a 
gluster mount, which runs in userspace - it has poor performance and is not 
really suitable for containers, but should be OK for e.g. large files 
accessed directly (no recursive find etc.).
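
For reference, mounting a gluster volume through the userspace FUSE client 
looks like this (server and volume names are examples):

# mount -t glusterfs serv1:/storage /mnt/shared
# or, persistently, an /etc/fstab line:
# serv1:/storage /mnt/shared glusterfs defaults,_netdev 0 0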



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXD on EC2/AWS - containers not getting IP addresses?

2017-03-22 Thread Tomasz Chmielewski

I've just launched a Ubuntu 16.04 instance on EC2.

Unfortunately, LXD containers are not getting an IP address.


# lxc launch ubuntu:16.04 website01

# lxc list
+---+-+--+--++---+
|   NAME|  STATE  | IPV4 | IPV6 |TYPE| SNAPSHOTS |
+---+-+--+--++---+
| website01 | RUNNING |  |  | PERSISTENT | 0 |
+---+-+--+--++---+


# dpkg-reconfigure -p medium lxd
(set IPv4 network)


# reboot


Still no IP address in the container:

# lxc list
+---+-+--+--++---+
|   NAME|  STATE  | IPV4 | IPV6 |TYPE| SNAPSHOTS |
+---+-+--+--++---+
| website01 | RUNNING |  |  | PERSISTENT | 0 |
+---+-+--+--++---+



dhclient is running:

root  1023  0.0  0.0  16124   864 ?Ss   03:50   0:00 
/sbin/dhclient -1 -v -pf /run/dhclient.eth0.pid -lf 
/var/lib/dhcp/dhclient.eth0.leases -I -df 
/var/lib/dhcp/dhclient6.eth0.leases eth0



lxdbr0 is present:

lxdbr0Link encap:Ethernet  HWaddr fe:3e:3b:09:3d:e4
  inet addr:10.76.6.1  Bcast:0.0.0.0  Mask:255.255.255.0
  inet6 addr: fe80::8091:8dff:fe30:bfb3/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:8 errors:0 dropped:0 overruns:0 frame:0
  TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:536 (536.0 B)  TX bytes:570 (570.0 B)


# brctl show
bridge name bridge id   STP enabled interfaces
lxdbr0  8000.fe3e3b093de4   no  vethHHJNCH

# dpkg -l|grep lxd
ii  lxd  2.0.9-0ubuntu1~16.04.2 
amd64Container hypervisor based on LXC - daemon
ii  lxd-client   2.0.9-0ubuntu1~16.04.2 
amd64Container hypervisor based on LXC - client



What step did I miss?


Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD on EC2/AWS - containers not getting IP addresses?

2017-03-22 Thread Tomasz Chmielewski

On 2017-03-23 13:00, Tomasz Chmielewski wrote:

I've just launched a Ubuntu 16.04 instance on EC2.

Unfortunately, LXD containers are not getting an IP address.


(...)


What step did I miss?


I see that I've run "dpkg-reconfigure -p medium lxd" too late (after the 
container was created), and because of this eth0 was set to manual (in 
/etc/network/interfaces.d/):



auto eth0
iface eth0 inet manual


New containers have it properly set to dhcp.
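
For reference, the working stanza inside the container (the exact file name 
under /etc/network/interfaces.d/ depends on the image) simply reads:

auto eth0
iface eth0 inet dhcp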


So - problem solved! :)


Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] preferred way to redirect ports to containers with private IPs?

2017-04-05 Thread Tomasz Chmielewski
Is there any "preferred" way of redirecting ports to containers with 
private IPs, from host's public IP(s)?



host 12.13.14.15:53/udp (public IP) -> container 10.1.2.3:53/udp 
(private IP)



I can imagine at least a few approaches:

1) in kernel:

- use iptables to map a port from the host's public IP to the container's 
private IP (a minimal sketch below)


- use LVS/ipvs/ldirectord to map a port from host's public IP to 
container's private IP



2) userspace:

- use a userspace proxy, like haproxy (won't work for all protocols, and 
some information is lost for the container, e.g. the origin IP)



They all, however, need some manual (or scripted) configuration, and will 
stay in place even if the container is stopped/removed (unless some more 
configuration/scripting is done, etc.).
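
The iptables sketch mentioned above, for the DNS example at the top (one-off 
rules only - persisting them is a separate problem; addresses as in the 
example):

# DNAT the host's public 53/udp to the container
iptables -t nat -A PREROUTING -d 12.13.14.15 -p udp --dport 53 \
    -j DNAT --to-destination 10.1.2.3:53

# make sure the forwarded traffic is accepted
iptables -A FORWARD -d 10.1.2.3 -p udp --dport 53 -j ACCEPT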



Does LXD have any built-in mechanism to "redirect ports"? Or, what would 
be the preferred way to do it?



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Experience with large number of LXC/LXD containers

2017-04-05 Thread Tomasz Chmielewski

On 2017-03-13 06:28, Benoit GEORGELIN - Association Web4all wrote:

Hi lxc-users ,

I would like to know if you have any experience with a large number of
LXC/LXD containers ?
In term of performance, stability and limitation .

I'm wondering, for example, if having 100 containers behaves the same as
having 1,000 or 10,000 with the same configuration - to avoid talking
about container usage.


I'm running LXD on several servers and I'm generally satisfied with it - 
performance and stability are fine. These servers mostly run <50 containers 
though.


I also have an LXD server which runs 100+ containers, 
starts/stops/deletes dozens of containers daily and is used for 
automation. Approximately once every 1-2 months, an "lxc stop" / "lxc 
restart" command will fail, which is a bit of a stability concern for us.


The cause is unclear. In LXD log for the container, the only thing 
logged is:



lxc 20170301115514.738 WARN lxc_commands - 
commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive 
response: Connection reset by peer.



When it starts to happen, it affects all containers - "lxc stop" / "lxc 
restart" will hang for any of the running containers. What's 
interesting is that the container does get stopped with "lxc stop" - the 
command just never returns. In the "lxc restart" case, it will just stop the 
container (and the command will not return / will not start the 
container again).


The only thing which fixes that is server restart.

There is also no clear way to reproduce it reliably (other than running 
the server for a long time, and starting/stopping a large number of 
containers over that time...).


I think it's some kernel issue, but unfortunately I was not able to 
debug this any further.




Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Run GUI in LXD 2.0+

2017-04-15 Thread Tomasz Chmielewski

On 2017-04-15 22:39, Felipe wrote:

Instructions:

a) In LXD container:

*  lxc launch ubuntu: pokemon

*  lxc exec pokemon bash

*  add-apt-repository ppa:x2go/stable && sudo apt-get update

*  apt-get install xfce4 xfce4-goodies xfce4-artwork
xubuntu-icon-theme firefox x2goserver x2goserver-xsession

*  adduser pikachu

*  vi /etc/ssh/sshd_config
 # Change to no to disable tunnelled clear text passwords
 PasswordAuthentication yes
*  /etc/init.d/ssh restart


While x2go is great for remote connectivity, this can be done much more 
simply if you're running the container locally (or over a low-latency, 
high-speed network) and don't need disconnected sessions etc.



Run this once:

* LXD host:

lxc launch ubuntu: pokemon
lxc exec pokemon bash


* container

apt install openssh-server firefox
adduser pikachu

# add your ssh key for pikachu



Then, connect with ssh -X:

ssh -X container_IP
export MOZ_NO_REMOTE=1 ; firefox


MOZ_NO_REMOTE=1 in the container is needed in case you run Firefox both 
locally and over SSH - otherwise, it won't be possible to start two 
separate Firefox instances.



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc archive / lxc restore command?

2017-04-13 Thread Tomasz Chmielewski

It might work, but it looks like overkill to me.

Essentially, I want to do:

- full container backup - then restore it, perhaps somewhere else, at 
some point in the future


Can I simply copy /var/lib/lxd/containers/MYCONTAINER to a different 
server and expect it to work? I don't think it will work, as lxd maintains 
its own database, and if I just copy the container, the database won't know 
about it.



- archive the container and use it in a physically different network

Meaning, one LXD server not able to access the other in any way.



I know I can just create a new container on the destination, stop it, 
then replace "rootfs" with the one from the source system. But it also 
doesn't look "nice enough".



And we lose any container settings with all of the above approaches.
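
For completeness, the "lxc publish" route Ron suggests would look roughly 
like this (a sketch only - alias and paths are made up, the exact name of 
the exported tarball may differ, and as noted the container-specific config 
doesn't survive the round-trip):

# on the source server (container stopped, or add --force)
lxc stop mycontainer
lxc publish mycontainer --alias mycontainer-backup
lxc image export mycontainer-backup /backup/mycontainer-backup

# copy the resulting tarball over ssh, then on the destination server:
lxc image import /backup/mycontainer-backup.tar.gz --alias mycontainer-backup
lxc launch mycontainer-backup mycontainer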


Tomasz Chmielewski
https://lxadm.com





On 2017-04-13 21:28, Ron Kelley wrote:

What about “lxc publish”?  That should allow you to publish your
container as an image to your local repository.  Then, you can
probably use the “lxc image” command to pull it from your repository
and start it.




On Apr 13, 2017, at 8:22 AM, Tomasz Chmielewski <man...@wpkg.org> 
wrote:


Is there such a thing as "lxc archive / lxc restore"? Or perhaps 
"lxc export / lxc import"?



I have a container called "mycontainer". I would like to "archive" it, 
and save to backup.


Then, perhaps a month later, I would like to restore it, on a 
different LXD server.



Right now, the closest command I can see is "lxc copy".

Unfortunately it requires:

- the copying part needs to be done "now"

- at least one server needs to be able to manage the other server (lxc 
remote)



In other words - how to best achieve:

- tar a selected container

- copy it via SSH somewhere

- restore at some later point in time somewhere else, on a different, 
unrelated LXD server




Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc archive / lxc restore command?

2017-04-13 Thread Tomasz Chmielewski
Is there such a thing as "lxc archive / lxc restore"? Or perhaps "lxc 
export / lxc import"?



I have a container called "mycontainer". I would like to "archive" it, 
and save to backup.


Then, perhaps a month later, I would like to restore it, on a different 
LXD server.



Right now, the closest command I can see is "lxc copy".

Unfortunately it requires:

- the copying part needs to be done "now"

- at least one server needs to be able to manage the other server (lxc 
remote)



In other words - how to best achieve:

- tar a selected container

- copy it via SSH somewhere

- restore at some later point in time somewhere else, on a different, 
unrelated LXD server




Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXD iptables rules and iptables-persistent?

2017-04-15 Thread Tomasz Chmielewski

LXD adds its own iptables rules.


When other iptables rules are applied on the system with 
iptables-persistent (e.g. blocking incoming IPv6 traffic on a laptop, if 
the provider/router does not do it), reloading them will basically wipe the 
rules which LXD applied on startup.



What's the recommended approach to deal with it?

Adding LXD rules to iptables-persistent?
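
One pragmatic approach I can think of (an assumption, not an official 
recommendation): let LXD start and add its rules once, then snapshot the 
combined ruleset into the files iptables-persistent restores at boot:

iptables-save > /etc/iptables/rules.v4
ip6tables-save > /etc/iptables/rules.v6

The downside is that the saved copy goes stale whenever the LXD bridge 
configuration changes, so it has to be re-run after such changes.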


Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Run GUI in LXD 2.0+

2017-04-15 Thread Tomasz Chmielewski

It's just an example, like 'foo'.


Tomasz Chmielewski
https://lxadm.com


On 2017-04-16 09:54, gunnar.wagner wrote:

why does it have to be ubuntu:pokemon? why not any OS? or is
'pokemon' just a placeholder like 'foo'

On 4/15/2017 11:29 PM, Matlink wrote:


Well, the lxc 1.0 version didn't require any network stack: it
mounted the required X11 files in the container.

However in lxc 2.0 I would like a solution like this, not using x2go
or ssh X11 forwarding.

The ideal solution for me would be to be able to run 'lxc exec test
firefox' to run Firefox in the container.

Le 15 avril 2017 16:09:00 GMT+02:00, Tomasz Chmielewski
<man...@wpkg.org> a écrit :

On 2017-04-15 22:39, Felipe wrote:
Instructions:

a) In LXD container:

*  lxc launch ubuntu: pokemon

*  lxc exec pokemon bash

*  add-apt-repository ppa:x2go/stable && sudo apt-get update

*  apt-get install xfce4 xfce4-goodies xfce4-artwork
xubuntu-icon-theme firefox x2goserver x2goserver-xsession

*  adduser pikachu

*  vi /etc/ssh/sshd_config
# Change to no to disable tunnelled clear text passwords
PasswordAuthentication yes
*  /etc/init.d/ssh restart

While x2go is great for remote connectivity, it can be really
simpler,
if you're running the container locally (or over low-latency, high
speed
network) and don't need disconnected sessions etc.

Run this once:

* LXD host:

lxc launch ubuntu: pokemon
lxc exec pokemon bash

* container

apt install openssh-server firefox
adduser pikachu

# add your ssh key for pikachu

Then, connect with ssh -X:

ssh -X container_IP
export MOZ_NO_REMOTE=1 ; firefox

MOZ_NO_REMOTE=1 in the container is needed in case you run Firefox
both
locally and over SSH - otherwise, it won't be possible to start two
separate Firefox instances.

Tomasz Chmielewski
https://lxadm.com

-

lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

 -- Matlink

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

-- Gunnar Wagner | Yongfeng Village Group 12 #5, Pujiang Town, Minhang
District, 201112 Shanghai, P.R. CHINA mob +86.159.0094.1702 | skype:
professorgunrad | wechat: 15900941702
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd process using lots of CPU

2017-03-09 Thread Tomasz Chmielewski

On 2017-03-10 03:16, Stéphane Graber wrote:


Hmm, then it matches another such report I've seen where some of the
threads are reported as using a lot of CPU, yet when trying to trace
them you don't actually see anything.

Can you try to run "strace -p" on the various threads that are reported
as eating all your CPU?

The similar report I got of this would just show them stuck on a futex,
which wouldn't explain the CPU use. And unfortunately it looked like
tracing the threads actually somehow fixed the CPU problem for that
user...


If you just want the problem gone, "systemctl restart lxd" should fix
things without interrupting your containers, but we'd definitely like 
to

figure this one out if we can.


Yes, restarting lxd fixed it.

stracing different threads was showing a similar output to what I've 
pasted before. Stuck in some kind of loop?



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxd process using lots of CPU

2017-03-09 Thread Tomasz Chmielewski

On a server with several ~idlish containers:


  PID USER  PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
19104 root   20   0 2548M 44132 15236 S 140.  0.0 58h03:17 
/usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
24966 root   20   0 2548M 44132 15236 S 18.2  0.0  2h45:36 
/usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
19162 root   20   0 2548M 44132 15236 S 17.5  0.0  3h31:49 
/usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
19120 root   20   0 2548M 44132 15236 S 16.2  0.0  3h16:11 
/usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
19244 root   20   0 2548M 44132 15236 S 11.0  0.0  1h48:56 
/usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
19123 root   20   0 2548M 44132 15236 S 11.0  0.0  3h34:42 
/usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
19243 root   20   0 2548M 44132 15236 S 10.4  0.0  1h06:13 
/usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
14962 root   20   0 2548M 44132 15236 R 10.4  0.0  3h17:27 
/usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
19356 root   20   0 2548M 44132 15236 S  9.7  0.0  2h16:44 
/usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
19161 root   20   0 2548M 44132 15236 R  9.7  0.0  1h26:40 
/usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
19126 root   20   0 2548M 44132 15236 R  9.1  0.0 22:11.20 
/usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
19115 root   20   0 2548M 44132 15236 R  8.4  0.0  2h55:21 
/usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
  693 root   20   0 2548M 44132 15236 R  8.4  0.0  2h28:02 
/usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log



That's actually one lxd process with many threads; view from htop.

Expected?

ii  liblxc12.0.7-0ubuntu1~16.04.1
  amd64Linux Containers userspace tools (library)
ii  lxc-common 2.0.7-0ubuntu1~16.04.1
  amd64Linux Containers userspace tools (common tools)
ii  lxcfs  2.0.6-0ubuntu1~16.04.1
  amd64FUSE based filesystem for LXC
ii  lxd2.0.9-0ubuntu1~16.04.2
  amd64Container hypervisor based on LXC - daemon
ii  lxd-client 2.0.9-0ubuntu1~16.04.2
  amd64Container hypervisor based on LXC - client




strace of the process mainly shows:

[pid 19124] poll([{fd=28, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1 
[pid 19120] poll([{fd=13, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1 
[pid 19124] <... poll resumed> )= 1 ([{fd=28, 
revents=POLLNVAL}])
[pid 19120] <... poll resumed> )= 1 ([{fd=13, 
revents=POLLNVAL}])
[pid 19124] poll([{fd=28, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1 
[pid 19120] poll([{fd=13, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1 
[pid 19124] <... poll resumed> )= 1 ([{fd=28, 
revents=POLLNVAL}])
[pid 19120] <... poll resumed> )= 1 ([{fd=13, 
revents=POLLNVAL}])
[pid 19124] poll([{fd=28, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1 
[pid 19120] poll([{fd=13, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1 
[pid 19124] <... poll resumed> )= 1 ([{fd=28, 
revents=POLLNVAL}])
[pid 19120] <... poll resumed> )= 1 ([{fd=13, 
revents=POLLNVAL}])
[pid 19124] poll([{fd=28, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1 
[pid 19120] poll([{fd=13, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1 
[pid 19124] <... poll resumed> )= 1 ([{fd=28, 
revents=POLLNVAL}])
[pid 19120] <... poll resumed> )= 1 ([{fd=13, 
revents=POLLNVAL}])
[pid 19124] poll([{fd=28, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1 
[pid 19120] poll([{fd=13, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1 
[pid 19124] <... poll resumed> )= 1 ([{fd=28, 
revents=POLLNVAL}])
[pid 19120] <... poll resumed> )= 1 ([{fd=13, 
revents=POLLNVAL}])
[pid 19124] poll([{fd=28, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1 
[pid 19120] poll([{fd=13, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1 



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Experience with large number of LXC/LXD containers

2017-04-06 Thread Tomasz Chmielewski

On 2017-04-07 06:41, Serge E. Hallyn wrote:


would you mind opening an issue for this at github.com/lxc/lxd/issues?
Just add in all the info you have and, if I understand right that you
can't put time into further reproductions, just say so up top so
hopefully we won't bug you too much.


Here it is:

https://github.com/lxc/lxd/issues/3159


I can try reproducing that if you have any ideas how to do it.

And/or, what exactly to run if it hangs again to get some more debugging 
- note I'll have to run it relatively quickly, then will have to restart 
the server - meaning, most likely no time for any interaction on the 
mailing list / github.



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD 2.14 - Ubuntu 16.04 - kernel 4.4.0-57-generic - SWAP continuing to grow

2017-07-28 Thread Tomasz Chmielewski
Most likely your database cache is simply set too large.

I've been experiencing similar issues with MySQL  (please read in detail):

https://stackoverflow.com/questions/43259136/mysqld-out-of-memory-with-plenty-of-memory/43259820

It finally went away after I kept lowering the MySQL cache by a few GB from 
one OOM to the next, until it stopped happening.
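
For MySQL/MariaDB the knob to lower is typically innodb_buffer_pool_size - a 
sketch only, the config path and value are examples and need to be sized 
against your container's memory limit:

# /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
# lower this step by step until the OOM kills stop
innodb_buffer_pool_size = 2G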

Tomasz Chmielewski
https://lxadm.com

On Saturday, July 15, 2017 18:36 JST, Saint Michael <vene...@gmail.com> wrote: 
 
> I have a lot of memory management issues using pure LXC. In my case, my box
> has only one container. I use LXC to be able to move my app around, not to
> squeeze performance out of hardware. What happens is my database gets
> killed by the OOM manager, although there are gigabytes of RAM used for cache.
> The memory manager kills applications instead of reclaiming memory from
> disc cache. How can this be avoided?
> 
> My config at the host is:
> 
> vm.hugepages_treat_as_movable=0
> vm.hugetlb_shm_group=27
> vm.nr_hugepages=2500
> vm.nr_hugepages_mempolicy=2500
> vm.nr_overcommit_hugepages=0
> vm.overcommit_memory=0
> vm.swappiness=0
> vm.vfs_cache_pressure=150
> vm.dirty_ratio=10
> vm.dirty_background_ratio=5
> 
> This shows the issue
> [9449866.130270] Node 0 hugepages_total=1250 hugepages_free=1250
> hugepages_surp=0 hugepages_size=2048kB
> [9449866.130271] Node 1 hugepages_total=1250 hugepages_free=1248
> hugepages_surp=0 hugepages_size=2048kB
> [9449866.130271] 46181 total pagecache pages
> [9449866.130273] 33203 pages in swap cache
> [9449866.130274] Swap cache stats: add 248571542, delete 248538339, find
> 69031185/100062903
> [9449866.130274] Free swap  = 0kB
> [9449866.130275] Total swap = 8305660kB
> [9449866.130276] 20971279 pages RAM
> [9449866.130276] 0 pages HighMem/MovableOnly
> [9449866.130276] 348570 pages reserved
> [9449866.130277] 0 pages cma reserved
> [9449866.130277] 0 pages hwpoisoned
> [9449866.130278] [ pid ]   uid  tgid total_vm  rss nr_ptes nr_pmds
> swapents oom_score_adj name
> [9449866.130286] [  618] 0   61887181  135 168   3
>3 0 systemd-journal
> [9449866.130288] [  825] 0   82511343  130  25   3
>0 0 systemd-logind
> [9449866.130289] [  830] 0   830 1642   31   8   3
>0 0 mcelog
> [9449866.130290] [  832]   996   83226859   51  23   3
>   47 0 chronyd
> [9449866.130292] [  834] 0   834 4905  100  12   3
>0 0 irqbalance
> [9449866.130293] [  835] 0   835 6289  177  15   3
>0 0 smartd
> [9449866.130295] [  837]81   83728499  258  28   3
>  149  -900 dbus-daemon
> [9449866.130296] [  857] 0   857 1104   16   7   3
>0 0 rngd
> [9449866.130298] [  859] 0   859   19246337114 224   4
>  40630 0 NetworkManager
> [9449866.130300] [  916] 0   91625113  229  50   3
>0 -1000 sshd
> [9449866.130302] [  924] 0   924 6490   50  17   3
>0 0 atd
> [9449866.130303] [  929] 0   92935327  199  20   3
>  284 0 agetty
> [9449866.130305] [  955] 0   95522199 3185  43   3
>  312 0 dhclient
> [9449866.130307] [ 1167] 0  1167 6125   88  17   3
>2 0 lxc-autostart
> [9449866.130309] [ 1176] 0  117610818  275  24   3
>   38 0 systemd
> [9449866.130310] [ 1188] 0  118813303 1980  29   3
>   36 0 systemd-journal
> [9449866.130312] [ 1372]99  1372 38812  12   3
>   45 0 dnsmasq
> [9449866.130313] [ 1375]81  1375 6108   77  17   3
>   39  -900 dbus-daemon
> [9449866.130315] [ 1394] 0  1394 6175   46  15   3
>  168 0 systemd-logind
> [9449866.130316] [ 1395] 0  139578542 1142  69   3
>4 0 rsyslogd
> [9449866.130317] [ 1397] 0  1397 1614   32   8   3
>0 0 agetty
> [9449866.130319] [ 1398] 0  1398 1614   31   8   3
>0 0 agetty
> [9449866.130320] [ 1400] 0  1400 1614   31   8   3
>0 0 agetty
> [9449866.130321] [ 1401] 0  1401 16142   8   3
>   30 0 agetty
> [9449866.130322] [ 1402] 0  1402 16142   8   3
>   29 0 agetty
> [9449866.130324] [ 1403] 0  1403 1614   31   8   3
>0 0 agetty
> [9449866.13032

Re: [lxc-users] ?==?utf-8?q? ?==?utf-8?q? "lxc network create" error

2017-08-01 Thread Tomasz Chmielewski
On Tuesday, August 01, 2017 18:04 JST, Sjoerd <sjo...@sjomar.eu> wrote: 
 
> 
> 
> On 30-07-17 17:15, Tomasz Chmielewski wrote:
> > Bug or a feature?
> >
> > # lxc network create dev
> > error: Failed to run: ip link add dev type bridge: Error: either "dev" is 
> > duplicate, or "bridge" is a garbage.
> >
> >
> > # lxc network create devel
> > Network devel created
> >
> >
> I vote for feature, since "dev" is most likely a reserved word, as it's 
> short for device in routing terms.

Unless someone has e.g. "prod" and "dev" environments.
 
> i.e. setting routing can be done like : ip route add 192.168.10.0/24 via 
> 10.2.2.1 dev eth0

But that's a different command.


> So in you-re case the command would end like : dev dev ...I would be 
> confused by that as well ;)

If we treat it as a feature - then it's an undocumented feature. We would need 
documentation specifying a list of disallowed network names.
 
-- 
Tomasz Chmielewski 
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] OVS / GRE - guest-transparent mesh networking across multiple hosts

2017-08-03 Thread Tomasz Chmielewski
I think the fan is single-server only and/or won't cross different networks.

You may also take a look at https://www.tinc-vpn.org/

Tomasz
https://lxadm.com

On Thursday, August 03, 2017 20:51 JST, Félix Archambault 
 wrote: 
 
> Hi Amblard,
> 
> I have never used it, but this may be worth taking a look to solve your
> problem:
> 
> https://wiki.ubuntu.com/FanNetworking
> 
> On Aug 3, 2017 12:46 AM, "Amaury Amblard-Ladurantie" 
> wrote:
> 
> Hello,
> 
> I am deploying 10+ bare metal servers to serve as hosts for containers
> managed through LXD.
> As the number of containers grows, management of inter-container
> networking across different hosts becomes difficult and needs to be
> streamlined.
> 
> The goal is to setup a 192.168.0.0/24 network over which containers
> could communicate regardless of their host. The solutions I looked at
> [1] [2] [3] recommend use of OVS and/or GRE on hosts and the use of
> bridge.driver: openvswitch configuration for LXD.
> Note: baremetal servers are hosted on different physical networks and
> use of multicast was ruled out.
> 
> An illustration of the goal architecture is similar to the image visible on
> https://books.google.fr/books?id=vVMoDwAAQBAJ=PA168=
> 6aJRw15HSf=PA197#v=onepage=false
> Note: this extract is from a book about LXC, not LXD.
> 
> The point that is not clear is
> - whether each container needs to have as many veths as there are
> baremetal hosts, in which case [de]commissioning of a new baremetal would
> require configuration updated of all existing containers (and
> basically rule out this scenario)
> - or whether it is possible to "hide" this mesh network at the host
> level and have a single veth inside each container to access the
> private network and communicate with all the other containers
> regardless of their physical location and regardless of the number of
> physical peers
> 
> Has anyone built such a setup?
> Does the OVS+GRE setup need to be build prior to LXD init or can LXD
> automate part of the setup?
> Online documentation is scarce on the topic so any help would be
> appreciated.
> 
> Regards,
> Amaury
> 
> [1] https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/
> [2] https://stackoverflow.com/questions/39094971/want-to-use
> -the-vlan-feature-of-openvswitch-with-lxd-lxc
> [3] https://bayton.org/docs/linux/lxd/lxd-zfs-and-bridged-ne
> tworking-on-ubuntu-16-04-lts/
> 
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc commands failing randomly: error: not found

2017-07-12 Thread Tomasz Chmielewski
On Wednesday, July 12, 2017 20:33 JST, "Tomasz Chmielewski" wrote: 
 
> In the last days, lxc commands are failing randomly.
> 
> Example (used in a script):
> 
> # lxc config set TDv2-z-testing-a19ea62218-2017-07-12-11-23-03 raw.lxc 
> lxc.aa_allow_incomplete=1
> error: not found
> 
> It shouldn't fail, because:
> 
> # lxc list|grep TDv2-z-testing-a19ea62218-2017-07-12-11-23-03 
> | TDv2-z-testing-a19ea62218-2017-07-12-11-23-03| STOPPED |
> |   | 
> PERSISTENT | 0 |

To be more specific: it only seems to fail on "lxc config" commands (lxc config 
set, lxc config edit, lxc config device add).

My container startup script works like below:

lxc copy container-template newcontainer
lxc config set newcontainer raw.lxc "lxc.aa_allow_incomplete=1"
...lots of lxc file pull / push...
lxc config show newcontainer | sed -e "s/$BRIDGE_OLD/$BRIDGE_NEW/" | lxc config 
edit newcontainer
...again, lots of lxc file pull / push...
lxc config device add newcontainer uploads disk source=/some/path 
path=/var/www/path

It only fails with "error: not found" on the first, second or third "lxc 
config" line.

It started to fail in the last 2 weeks I think (lxd updates?) - before, it was 
rock solid.
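
As a stopgap for the script (not a fix for the underlying issue), I'd 
consider wrapping the flaky calls in a small retry helper - a sketch:

lxc_retry() {
    local i
    for i in 1 2 3 4 5; do
        lxc "$@" && return 0
        echo "lxc $* failed, retrying ($i)..." >&2
        sleep 2
    done
    return 1
}

lxc_retry config set newcontainer raw.lxc "lxc.aa_allow_incomplete=1"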



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc commands failing randomly: error: not found

2017-07-12 Thread Tomasz Chmielewski
In the last days, lxc commands are failing randomly.

Example (used in a script):

# lxc config set TDv2-z-testing-a19ea62218-2017-07-12-11-23-03 raw.lxc 
lxc.aa_allow_incomplete=1
error: not found

It shouldn't fail, because:

# lxc list|grep TDv2-z-testing-a19ea62218-2017-07-12-11-23-03 
| TDv2-z-testing-a19ea62218-2017-07-12-11-23-03| STOPPED |  
  |   | PERSISTENT 
| 0 |


The lxc client runs in a container and is using:

# dpkg -l|grep lxd
ii  lxd-client   2.15-0ubuntu6~ubuntu16.04.1~ppa1   
amd64Container hypervisor based on LXC - client



lxd server is using:

# dpkg -l|grep lxd
ii  lxd   2.15-0ubuntu6~ubuntu16.04.1~ppa1amd64 
   Container hypervisor based on LXC - daemon
ii  lxd-client2.15-0ubuntu6~ubuntu16.04.1~ppa1amd64 
   Container hypervisor based on LXC - client


Not sure how I can debug this.

-- 
Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc commands failing randomly

2017-07-12 Thread Tomasz Chmielewski
On Thursday, July 13, 2017 00:35 JST, "Tomasz Chmielewski" <man...@wpkg.org> 
wrote: 
 
> On Wednesday, July 12, 2017 20:52 JST, "Tomasz Chmielewski" <man...@wpkg.org> 
> wrote: 
> 
> > It only fails with "error: not found" on the first, second or third "lxc 
> > config" line.
> > 
> > It started to fail in the last 2 weeks I think (lxd updates?) - before, it 
> > was rock solid.
> 
> Also "lxc exec" fails.
> 
> Here is a reproducer:
> 
> # lxc copy base-uni-web01 ztest
> # lxc start ztest ; while true ; do OUT=$(lxc exec ztest date ; echo $?) ; 
> echo $OUT ; done
> Wed Jul 12 15:32:43 UTC 2017 0
> Wed Jul 12 15:32:54 UTC 2017 0
> Wed Jul 12 15:32:54 UTC 2017 0
> Wed Jul 12 15:32:55 UTC 2017 0
> Wed Jul 12 15:32:55 UTC 2017 0
> Wed Jul 12 15:32:55 UTC 2017 0
> Wed Jul 12 15:32:56 UTC 2017 0
> Wed Jul 12 15:32:56 UTC 2017 0
> Wed Jul 12 15:32:57 UTC 2017 0
> Wed Jul 12 15:32:57 UTC 2017 0
> Wed Jul 12 15:32:57 UTC 2017 0
> Wed Jul 12 15:32:58 UTC 2017 0
> error: not found
> Wed Jul 12 15:32:58 UTC 2017 1
> Wed Jul 12 15:33:09 UTC 2017 0
> Wed Jul 12 15:33:09 UTC 2017 0
> Wed Jul 12 15:33:19 UTC 2017 0
> Wed Jul 12 15:33:25 UTC 2017 0
> Wed Jul 12 15:33:40 UTC 2017 0
> Wed Jul 12 15:33:46 UTC 2017 0
> 
> 
> It seems to be easier to reproduce if the host server is slightly overloaded.

Also - it only happens when lxc remote is https://.

It doesn't happen when lxc remote is unix://


Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc commands failing randomly: error: not found

2017-07-12 Thread Tomasz Chmielewski
On Wednesday, July 12, 2017 20:52 JST, "Tomasz Chmielewski" <man...@wpkg.org> 
wrote: 

> It only fails with "error: not found" on the first, second or third "lxc 
> config" line.
> 
> It started to fail in the last 2 weeks I think (lxd updates?) - before, it 
> was rock solid.

Also "lxc exec" fails.

Here is a reproducer:

# lxc copy base-uni-web01 ztest
# lxc start ztest ; while true ; do OUT=$(lxc exec ztest date ; echo $?) ; echo 
$OUT ; done
Wed Jul 12 15:32:43 UTC 2017 0
Wed Jul 12 15:32:54 UTC 2017 0
Wed Jul 12 15:32:54 UTC 2017 0
Wed Jul 12 15:32:55 UTC 2017 0
Wed Jul 12 15:32:55 UTC 2017 0
Wed Jul 12 15:32:55 UTC 2017 0
Wed Jul 12 15:32:56 UTC 2017 0
Wed Jul 12 15:32:56 UTC 2017 0
Wed Jul 12 15:32:57 UTC 2017 0
Wed Jul 12 15:32:57 UTC 2017 0
Wed Jul 12 15:32:57 UTC 2017 0
Wed Jul 12 15:32:58 UTC 2017 0
error: not found
Wed Jul 12 15:32:58 UTC 2017 1
Wed Jul 12 15:33:09 UTC 2017 0
Wed Jul 12 15:33:09 UTC 2017 0
Wed Jul 12 15:33:19 UTC 2017 0
Wed Jul 12 15:33:25 UTC 2017 0
Wed Jul 12 15:33:40 UTC 2017 0
Wed Jul 12 15:33:46 UTC 2017 0


It seems to be easier to reproduce if the host server is slightly overloaded.

Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc commands failing randomly

2017-07-12 Thread Tomasz Chmielewski
\"2017-07-12T21:24:21.01927786Z\",\n\t\t\"updated_at\": 
\"2017-07-12T21:24:21.01927786Z\",\n\t\t\"status\": 
\"Running\",\n\t\t\"status_code\": 103,\n\t\t\"resources\": 
{\n\t\t\t\"containers\": 
[\n\t\t\t\t\"/1.0/containers/vpn-hz1\"\n\t\t\t]\n\t\t},\n\t\t\"metadata\": 
{\n\t\t\t\"fds\": {\n\t\t\t\t\"0\": 
\"05693ff9baedf120f482bf513e13956b214536d47629e5b40c635d0037e8bd27\",\n\t\t\t\t\"1\":
 
\"a59868283645f18a144eb671d15768681dc577b4d2ca1163cb4c65454ce6192e\",\n\t\t\t\t\"2\":
 
\"707ef10a6025752a4e89dcf24c26ac1ef2962492660b71e4ab1e51fb073c4939\",\n\t\t\t\t\"control\":
 
\"14b79db9b5ccd0224201c0e22cd35b2420a236ffb080992ec46c2488b36a3314\"\n\t\t\t}\n\t\t},\n\t\t\"may_cancel\":
 false,\n\t\t\"err\": \"\"\n\t}" t=2017-07-12T21:24:21+
lvl=dbug msg="Connected to the websocket" t=2017-07-12T21:24:21+
lvl=dbug msg="Connected to the websocket" t=2017-07-12T21:24:21+
lvl=dbug msg="Connected to the websocket" t=2017-07-12T21:24:21+
etag= lvl=info method=GET msg="Sending request to LXD" 
t=2017-07-12T21:24:21+ 
url=https://lxd-server:8443/1.0/operations/c2148e6d-bb82-4d8f-aafa-0a2a5b7c9fa0
lvl=dbug msg="got message barrier" t=2017-07-12T21:24:21+
lvl=dbug msg="got message barrier" t=2017-07-12T21:24:21+
error: not found

Wed Jul 12 21:24:21 UTC 2017 1


On Thursday, July 13, 2017 04:55 JST, Ivan Kurnosov <zer...@zerkms.ru> wrote: 
 
> Please run it with `--debug` for more details.
> 
> On 13 July 2017 at 03:42, Tomasz Chmielewski <man...@wpkg.org> wrote:
> 
> > On Thursday, July 13, 2017 00:35 JST, "Tomasz Chmielewski" <
> > man...@wpkg.org> wrote:
> >
> > > On Wednesday, July 12, 2017 20:52 JST, "Tomasz Chmielewski" <
> > man...@wpkg.org> wrote:
> > >
> > > > It only fails with "error: not found" on the first, second or third
> > "lxc config" line.
> > > >
> > > > It started to fail in the last 2 weeks I think (lxd updates?) -
> > before, it was rock solid.
> > >
> > > Also "lxc exec" fails.
> > >
> > > Here is a reproducer:
> > >
> > > # lxc copy base-uni-web01 ztest
> > > # lxc start ztest ; while true ; do OUT=$(lxc exec ztest date ; echo $?)
> > ; echo $OUT ; done
> > > Wed Jul 12 15:32:43 UTC 2017 0
> > > Wed Jul 12 15:32:54 UTC 2017 0
> > > Wed Jul 12 15:32:54 UTC 2017 0
> > > Wed Jul 12 15:32:55 UTC 2017 0
> > > Wed Jul 12 15:32:55 UTC 2017 0
> > > Wed Jul 12 15:32:55 UTC 2017 0
> > > Wed Jul 12 15:32:56 UTC 2017 0
> > > Wed Jul 12 15:32:56 UTC 2017 0
> > > Wed Jul 12 15:32:57 UTC 2017 0
> > > Wed Jul 12 15:32:57 UTC 2017 0
> > > Wed Jul 12 15:32:57 UTC 2017 0
> > > Wed Jul 12 15:32:58 UTC 2017 0
> > > error: not found
> > > Wed Jul 12 15:32:58 UTC 2017 1
> > > Wed Jul 12 15:33:09 UTC 2017 0
> > > Wed Jul 12 15:33:09 UTC 2017 0
> > > Wed Jul 12 15:33:19 UTC 2017 0
> > > Wed Jul 12 15:33:25 UTC 2017 0
> > > Wed Jul 12 15:33:40 UTC 2017 0
> > > Wed Jul 12 15:33:46 UTC 2017 0
> > >
> > >
> > > It seems to be easier to reproduce if the host server is slightly
> > overloaded.
> >
> > Also - it only happens when lxc remote is https://.
> >
> > It doesn't happen when lxc remote is unix://
> >
> >
> > Tomasz Chmielewski
> > https://lxadm.com
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> 
> 
> 
> 
> -- 
> With best regards, Ivan Kurnosov
 
 
 
-- 
Tomasz Chmielewski
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd 2.15 broke "lxc file push -r"?

2017-06-29 Thread Tomasz Chmielewski
On Thursday, June 29, 2017 13:30 JST, Stéphane Graber <stgra...@ubuntu.com> 
wrote: 
 
> Hmm, so that just proves that we really need more systematic testing of the
> way we handle all that mess in "lxc file push".
> 
> For this particular case, my feeling is that LXD 2.5's behavior is
> correct. 

OK, I can see we can restore "2.14 behaviour" with an asterisk, i.e.:

lxc file push -r /tmp/testdir/* container/tmp

Which makes "lxc file push -r" in 2.15 behave similar like cp.


Before, 2.14 behaved similar like rsync.


So can we assume, going forward, the behaviour won't change anymore? :)


Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxd 2.15 broke "lxc file push -r"?

2017-06-28 Thread Tomasz Chmielewski
With lxd 2.14:

# mkdir /tmp/testdir
# touch /tmp/testdir/file1 /tmp/testdir/file2
# lxc file push -r /tmp/testdir/ testvm1/tmp # < note the trailing 
slash after /tmp/testdir/
# echo $?
0
# lxc exec testvm1 ls /tmp
file1  file2


With lxd 2.15:

# mkdir /tmp/testdir
# touch /tmp/testdir/file1 /tmp/testdir/file2
# lxc file push -r /tmp/testdir/ testvm2/tmp # < note the trailing 
slash after /tmp/testdir/
# lxc exec testvm2 ls /tmp
testdir
# lxc exec testvm2 ls /tmp/testdir
file1  file2


This breaks many scripts!

-- 
Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

  1   2   >