That's odd advice, given that LXC containers are Linux containers, and ZFS is
not in the Linux kernel (I know there are some third-party porting
attempts, but they're not applicable in many situations).
--
Tomasz Chmielewski
http://www.sslrack.com
that the
container root filesystem will be a directory under
/var/lib/lxc/container/rootfs.
How can I do the same with lxd (lxc command)? It seems to default to
dir.
# lxc launch images:ubuntu/trusty/amd64 test-container
--
Tomasz Chmielewski
http://wpkg.org
point me to the bug filing system for linuxcontainers.org?
The closest to contributing seems to be here:
https://linuxcontainers.org/lxd/contribute/
but I don't see any way to report a bug, an issue tracker, or anything similar.
--
Tomasz Chmielewski
http://wpkg.org
equivalent for lxd, where this could be set?).
However, the container is already created in a directory, so I don't
think the above error matters:
# btrfs sub list /srv|grep lxd
# btrfs sub list /srv|grep test-image
#
--
Tomasz Chmielewski
http://wpkg.org
On 2015-06-03 15:01, Tomasz Chmielewski wrote:
I'm trying to start an unprivileged container on Ubuntu 14.04;
unfortunately, the kernel crashes.
# lxc-create -t download -n test-container
(...)
# lxc-start -n test-container -F
Kernel crashes at this point.
It does not crash if I start
It may be worth trying, but it won't work reliably for most kernel
crashes (network, disk IO etc. may crash as well).
--
Tomasz Chmielewski
http://wpkg.org
On 2015-06-10 14:11, Christoph Lehmann wrote:
As a side note, you can use rsyslog's remote logging to get the oops
On 3 June 2015 08:01
Is there something similar for lxc?
--
Tomasz Chmielewski
http://wpkg.org
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users
Really not possible? How do people run debootstrap or pbuilder? These
tools are often part of build systems; am I really the first one to try
to run them in LXC?
Tomasz Chmielewski
http://wpkg.org
On 2015-07-01 17:22, Janjaap Bos wrote:
You cannot create devices from the container. You need
Trying to add a remote server:
# lxc remote add server02 https://server02:8443
Admin password for server02:
What is the remote password, and where do I set it? man lxc is not too
helpful here.
--
Tomasz Chmielewski
http://wpkg.org
On 2015-08-05 17:57, Tomasz Chmielewski wrote:
Trying to add a remote server:
# lxc remote add server02 https://server02:8443
Admin password for server02:
What is the remote password, and where do I set it? man lxc is not
too helpful here.
Sorry for the noise - found it:
# lxc config set
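For anyone hitting the same question: the relevant setting is LXD's trust password. A minimal sketch, assuming an LXD version where the key is named core.trust_password:

```shell
# On the server (server02), set the password remote clients must present:
lxc config set core.trust_password some-secret

# On the client, add the remote and enter the same password when prompted:
lxc remote add server02 https://server02:8443
```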
ed on each
container
start, as is the apparmor policy. The contents of the 'lxc.raw' config
item are appended to the auto-generated config.
Quoting Tomasz Chmielewski (man...@wpkg.org):
I get the following when starting a container with lxd:
Incomplete AppArmor support in your kernel
If you rea
On 2015-10-27 23:36, Serge Hallyn wrote:
Quoting Tomasz Chmielewski (man...@wpkg.org):
Thanks, it worked.
How do I set other "lxc-style" values in lxd, like for example:
lxc.network.ipv4 = 10.0.12.2/24
lxc.network.ipv4.gateway = 10.0.12.1
lxc.network.ipv6 = ::333
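In LXD, per-container LXC keys like these go through the raw.lxc config item, which is appended to the auto-generated config. A hedged sketch, assuming a container named test-container; the exact lxc.network.* key names depend on the LXC version underneath:

```shell
# Append raw LXC config lines to the auto-generated container config.
# $'...' embeds real newlines between the three keys.
lxc config set test-container raw.lxc \
    $'lxc.network.ipv4 = 10.0.12.2/24\nlxc.network.ipv4.gateway = 10.0.12.1\nlxc.network.ipv6 = ::333'

# Verify what was stored:
lxc config get test-container raw.lxc
```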
, that one is bad! Can you open an issue for that?
Added:
https://github.com/lxc/lxd/issues/1246
Tomasz Chmielewski
http://wpkg.org
n for lxd containers is auto-generated on each
container
start, as is the apparmor policy. The contents of the 'lxc.raw'
config
item are appended to the auto-generated config.
Quoting Tomasz Chmielewski (man...@wpkg.org):
I get the following when starting a container with lxd:
Incom
a "config"
file, like with lxc. Is it "metadata.yaml"? If so - how to set it there?
Tomasz Chmielewski
http://wpkg.org
privileged lxc
containers?
Tomasz Chmielewski
http://wpkg.org
"iptables-save" executed by non-root user
(in non-container).
Tomasz Chmielewski
http://wpkg.org
m012d testvm13d
462M	testvm012d
11M	testvm13d
So it must be lxc-destroy.
Tomasz Chmielewski
http://wpkg.org
On 2015-11-11 07:28, Serge Hallyn wrote:
Hi,
as I think was mentioned elsewhere I suspect this is a bug in the clone
code.
Could you open a github issue at github.com/lxc/lxc/issues and assign
it to
me?
Added:
https://github.com/lxc/lxc/issues/694
--
Tomasz Chmielewski
http://wpkg.org
or
not.)
Looks like lxc-clone should copy the config file at the very end, after
rootfs.
Tomasz Chmielewski
http://wpkg.org
ntu14.04.1~ppa1 amd64  Linux Containers
userspace tools (Python 3.x bindings)
# uname -a
Linux srv7 4.3.3-040303-generic #201512150130 SMP Tue Dec 15 06:32:30
UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Tomasz Chmielewski
http://wpkg.org/
Not sure what's the procedure for this one:
# lxc list
error: Get https://10.0.0.1:8443/1.0/containers?recursion=1: x509:
certificate has expired or is not yet valid
?
Tomasz Chmielewski
http://wpkg.org
omated starting / removal of containers etc.)
- could not find anything about changing the cert in LXD docs, so it was
a bit of a problem working out why it doesn't work anymore and how to
fix it
The whole process could be designed a bit better :)
Tomasz Chmielewski
On 2016-06-02 21:09, Tomasz Chmielewski wrote:
Not sure what's the procedure for this one:
# lxc list
error: Get https://10.0.0.1:8443/1.0/containers?recursion=1: x509:
certificate has expired or is not yet valid
Apparently LXD sets up a certificate with 1 year validity when
installed
sort of
funny ELF headers that criu doesn't quite understand.
This is pretty much standard Ubuntu 16.04. The only running binary which
is out of repositories is nginx (from upstream nginx repo).
Tomasz Chmielewski
http://wpkg.org
) Error (cr-dump.c:1600): Dumping FAILED.
Expected?
Tomasz Chmielewski
http://wpkg.org
set it on the host (on the host http_proxy is empty) nor in the
container.
Tomasz Chmielewski
http://wpkg.org
rs, as all
earlier kernels are not stable with btrfs.
Could be it's not compatible?
Do you still want me to report the issue?
Tomasz Chmielewski
http://wpkg.org
# lxc snapshot odoo10 "2016-01-10 23:26"
# lxc delete "odoo10/2016-01-10 23:26"
error: unknown remote name: "odoo10/2016-01-10 23"
# lxc delete "odoo10/2016-01-10 23\:26"
error: unknown remote name: "odoo10/2016-01-10 23\\"
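A practical workaround until the CLI parser copes with spaces: avoid spaces and colons in snapshot names altogether. A sketch:

```shell
# Use a shell-safe timestamp with no spaces or colons in it:
SNAP=$(date +%Y-%m-%d_%H-%M)     # e.g. 2016-01-10_23-26
lxc snapshot odoo10 "$SNAP"
lxc delete "odoo10/$SNAP"
```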
Tomasz Chmielewski
htt
Is there a way to run the command in the background, when running "lxc
exec"?
It doesn't seem to work for me.
# lxc exec container -- sleep 2h &
[2] 13566
#
[2]+ Stopped lxc exec container -- sleep 2h
This also doesn't work:
# lxc exec container -- "sle
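A workaround that has worked for others (a sketch, not guaranteed on every LXD version): detach the command inside the container with a shell, so "lxc exec" itself can return:

```shell
# Background the command inside the container rather than backgrounding
# lxc exec itself; nohup plus redirection keeps it alive after the exec
# session closes.
lxc exec container -- sh -c 'nohup sleep 2h >/dev/null 2>&1 &'
```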
ot;status_code":200,"metadata":["/1.0"],"operation":""}
Do you have any ideas to debug this?
The same command executed directly on the host always works fine.
Tomasz Chmielewski
http://wpkg.org
e
a
<-- extra end of line
<-- extra end of line
container#
Tomasz Chmielewski
http://wpkg.org
Doing the following seems to help:
# service lxcfs stop
# service lxcfs start
Then, I'm able to manually start lxc and lxd containers.
Tomasz
On 2016-02-26 09:04, Tomasz Chmielewski wrote:
None of my lxc or lxd containers start after the most recent lxc/lxd
update.
Any clues?
# lxc
4-19-20-55:~#
Tomasz
On 2016-02-26 09:16, Tomasz Chmielewski wrote:
Doing the following seems to help:
# service lxcfs stop
# service lxcfs start
Then, I'm able to manually start lxc and lxd containers.
Tomasz
On 2016-02-26 09:04, Tomasz Chmielewski wrote:
None of my lxc nor lxd container s
2.0.0~beta4-0ubuntu5~ubuntu14.04.1~ppa1 amd64  Container
hypervisor based on LXC - extra tools
Tomasz Chmielewski
http://wpkg.org
On 2016-01-20 02:04, Serge Hallyn wrote:
Quoting Tomasz Chmielewski (man...@wpkg.org):
Can lxc restore a snapshot as a new container?
Let's say I have a container named "container1" and make a snapshot
called "test1":
# lxc snapshot container1 "test1"
H
changed file pull/push a little while ago to work against stopped
containers too, clearly I forgot to update the documentation :)
Excellent!
A pull request would be appreciated, otherwise I'll try to remember to
fix this next time I look at the specs.
I would if I knew how!
Tomasz Chmielewski
http
7-0ubuntu2~ubuntu14.04.1~ppa1 amd64
[1]
https://github.com/lxc/lxd/blob/master/specs/command-line-user-experience.md#file
Tomasz Chmielewski
http://wpkg.org/
On 2016-01-25 22:19, Tomasz Chmielewski wrote:
Let's say I have a container named "container1" and make a snapshot
called "test1":
# lxc snapshot container1 "test1"
How would I restore it as a new container, called "container1-test"?
lxc copy cont
te=0" should not
protect snapshots if named explicitly, i.e. "lxc delete
containername/snapshot"
Tomasz Chmielewski
http://wpkg.org
get exited with error
Tomasz Chmielewski
http://wpkg.org
Can lxc restore a snapshot as a new container?
Let's say I have a container named "container1" and make a snapshot
called "test1":
# lxc snapshot container1 "test1"
How would I restore it as a new container, called "container1-test&quo
Something like this reproduces it for me reliably (hangs on the first or
second "stop"):
while true; do
    echo stop
    time lxc stop containername --debug
    sleep 5
    echo start
    lxc start containername
done
Tomasz
On 2016-03-11 01:35, Tomasz Chmielewski wrote:
Am I th
hypervisor
based on LXC - client
ii lxd-tools
2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1 amd64  Container hypervisor
based on LXC - extra tools
"lxc restart containername" mostly just hangs.
Tomasz
On 2016-03-09 17:53, Tomasz Chmielewski wro
real	0m5.520s
user	0m0.044s
sys	0m0.004s
restart
real	0m49.382s
user	0m0.052s
sys	0m0.000s
restart
real	0m33.426s
user	0m0.048s
sys	0m0.000s
restart
...hangs here...
Tomasz
On 2016-03-11 02:23, Tomasz Chmielewski wrote:
Something like this reproduces it for me reliably
t;type":"disk"},"uploads":{"path":"/var/www/uploads","source":"/srv/deployment/uploads","type":"disk"}},"name":"z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26","profiles":["default"],"stateful":false,"status":"Running","status_code":103}}
DBUG[03-09|08:50:05] Putting
{"action":"stop","force":false,"stateful":false,"timeout":-1}
to
http://unix.socket/1.0/containers/z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26/state
DBUG[03-09|08:50:05] Raw response: {"type":"async","status":"Operation
created","status_code":100,"metadata":{"id":"818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d","class":"task","created_at":"2016-03-09T08:50:05.465171729Z","updated_at":"2016-03-09T08:50:05.465171729Z","status":"Running","status_code":103,"resources":{"containers":["/1.0/containers/z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26"]},"metadata":null,"may_cancel":false,"err":""},"operation":"/1.0/operations/818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d"}
DBUG[03-09|08:50:05]
1.0/operations/818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d/wait
Just sits and hangs here.
Is there any quick fix for that?
Other than that - do you have any system which checks basic
functionality before pushing packages to the general public? It seems
we've had lots of bugs making lxd unusable lately.
Tomasz Chmielewski
http://wpkg.org
ve erratic).
Is this an intended change?
Tomasz Chmielewski
http://wpkg.org
the database dir, copy the files in
Other than that - works fine, snapshots are very useful.
It's hard for me to say what's "more stable" on Linux (btrfs or zfs); my
bet would be on btrfs getting more attention in the coming year, as it's
getting its remaining bugs fixed.
Tomasz Chmiel
On 2016-06-30 18:55, Sjoerd wrote:
On 30/06/2016 11:17, Tomasz Chmielewski wrote:
Please note that btrfs is not a stable filesystem, at least not in the
latest Ubuntu (16.04).
You may have "out of space" errors with them, especially when doing
snapshots.
kernels 4.6.x[1] beh
untu, from
http://kernel.ubuntu.com/~kernel-ppa/mainline/
Tomasz Chmielewski
http://wpkg.org/
Group memory swap accounting is disabled, swap limits will be
ignored."
Jan 31 07:28:37 lxd01 systemd[1]: Started LXD - main daemon.
Tomasz
On 2017-01-31 16:17, Tomasz Chmielewski wrote:
lxd process on one of my servers started to hang a few days ago.
In syslog, I can see the follo
xd/lxd/operations.go:137
+0x127
Jan 31 06:46:06 lxd01 lxd[21955]: error: LXD still not running after
600s timeout.
Tomasz Chmielewski
https://lxadm.com
2017-01-31 16:29, Tomasz Chmielewski wrote:
I think it may be related to
https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/
I have an LXD container running Docker, with several Docker containers
inside, and with lxd snapshots.
Doing this:
# lxc delete docker
error: Get
http://unix.socket/1.0/operation
at: https://dl.stgraber.org/lxd-2.0.8-btrfs
SHA256:
4d9a7ef7c4635d7dd6c3e41f0eb1a3db12d38a8148b3940aa801c7355510e815
Stéphane
On Wed, Feb 01, 2017 at 03:19:44PM +0900, Tomasz Chmielewski wrote:
Unfortunately it's still crashing, around 1 day after removing the
docker
container:
Feb 1 06:16:20
On 2017-02-03 12:52, Tomasz Chmielewski wrote:
Suddenly, today, I'm not able to stop or reboot any of my containers:
# lxc stop some-container
Just sits there forever.
In /var/log/lxd/lxd.log, only this single entry shows up:
t=2017-02-03T03:46:20+ lvl=info msg="Shutting down cont
FUSE based filesystem for LXC
# uname -a
Linux lxd01 4.9.0-040900-generic #201612111631 SMP Sun Dec 11 21:33:00
UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Any clues how to fix this?
Tomasz Chmielewski
https://lxadm.com
lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive
response: Connection reset by peer.
How can it be debugged?
Tomasz
On 2017-02-03 13:05, Tomasz Chmielewski wrote:
On 2017-02-03 12:52, Tomasz Chmielewski wrote:
Suddenly, today, I'm not able to stop or reboot any of containe
itial_deploy/rootfs/etc/nginx/conf.d':
Value too large for defined data type
"/vagrant/" is a bind-mount directory within LXD:
vagrant:
path: /vagrant
source: /home/vagrantvm/Desktop/vagrant
type: disk
Anyone else seeing this?
The files are copied, just the warning is a
04.
"cp" is copying config files.
They are a few hundred bytes in size, a few kilobytes at most.
So must be something else.
Tomasz Chmielewski
https://lxadm.com
On 2016-10-05 00:41, Tomasz Chmielewski wrote:
On 2016-10-05 00:05, Michael Peek wrote:
I could be completely wrong about everything, but here's what I think
is
going on:
If I'm correct then the version of cp you have inside the container
was
compiled without large file support enabled
ainer-name
MOUNTNAME=something
MOUNTPATH=/mnt/on/the/host
CONTAINERPATH=/mnt/inside/the/container
lxc config device add $CONTAINER $MOUNTNAME disk source=$MOUNTPATH
path=$CONTAINERPATH
Tomasz Chmielewski
https://lxadm.com
LXD1-LXD3, LXD2-LXD3) and constantly adjusting the
routes as containers are stopped/started/migrated, it's a bit of a
management nightmare. And even more so if the number of LXD servers
grows.
Hints, discussion?
Tomasz Chmielewski
https://lxadm.com
On 2016-09-18 21:05, Sergiusz Pawlowicz wrote:
On Sun, Sep 18, 2016 at 4:16 PM, Tomasz Chmielewski <man...@wpkg.org>
wrote:
While I can imagine setting up many OpenVPN tunnels between all LXD
servers
I cannot imagine that :-) :-)
Use tinc, mate. Your life begins :-)
https://ww
containers on each of
these servers - but their "LAN" will be limited to each LXD server
(unless we do some special tricks).
Some other hosting providers offer a public IP, or several public IPs
per server, in the same datacentre, but again, no LAN.
Tomasz
r than any userspace programs).
On the other hand, it provides no encryption between LXD servers (or, in
fact, any other virtualization), so it may depend on your exact
requirements.
Tomasz Chmielewski
in LXD 2.0,
but I'll likely be posting something about the new network stuff in the
next few weeks.
"cross-host tunnels with GRE or VXLAN"
Interesting!
Will it be limited to 2 LXD servers only, or will it allow an arbitrary
number of LXD servers (2, 3, 4 and more)?
Toma
ticast... so that's not going to work for most people with popular
server hosting, since they don't offer multicast.
Tomasz Chmielewski
https://lxadm.com
Debian stretch (currently the "testing"
distribution), 64bit kernel 4.7.4-grsec
Might be 4.7.x kernel.
There was some OOM regression in 4.7.x, but I'm not sure if it was fixed
in 4.7.4 or not.
Tomasz Chmielewski
https://lxadm.com
containers
are stopped/started.
But of course I don't like this kind of fix.
Is anyone else seeing this too? Any better workaround than constant
arping from all affected containers?
Tomasz Chmielewski
https://lxadm.com
ZFS is not a distributed filesystem.
So the only way to do what you want is to use DRBD, and ZFS on top of
it.
Tomasz Chmielewski
https://lxadm.com
On 2016-11-03 22:42, Benoit GEORGELIN - Association Web4all wrote:
Thanks, looks like nobody uses LXD in a cluster
Cordialement,
Benoît
A is in maintenance or crashed.
There are a lot of distributed file systems (Gluster, Ceph, BeeGFS,
Swift, etc.), but in my case I like using ZFS with LXD and I would
like to keep that possibility.
If you want to stick with ZFS, then your only option is setting up DRBD.
Tomasz Chmielewski
https://lxadm.com
chattr +C
only works on directories and empty files; for existing files, you'll
have to move them out and copy back in)
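The move-out-and-copy-back step can be sketched like this (a sketch, assuming btrfs and a hypothetical data directory; cp --reflink=never forces a real copy so the new files inherit the directory's No_COW flag):

```shell
# Mark a new (empty) directory NOCOW; files created in it inherit the flag.
mkdir /srv/db-nocow
chattr +C /srv/db-nocow

# Existing files must be genuinely copied in, not reflinked or moved:
cp --reflink=never /srv/db/* /srv/db-nocow/

# Check the attribute took effect:
lsattr -d /srv/db-nocow
```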
Tomasz Chmielewski
https://lxadm.com
6.2
# lxc --version
2.0.5
Tomasz Chmielewski
https://lxadm.com
;.
I've tried setting DefaultBlockIOAccounting=yes in
/etc/systemd/system.conf, but it doesn't change anything (even after
systemd reload, system restart).
Any hints?
Tomasz Chmielewski
https://lxadm.com
On 2016-12-22 11:56, Tomasz Chmielewski wrote:
Ubuntu 16.04 hosts / containers and gluster 3.8:
# gluster volume create storage replica 2 transport tcp
serv1:/gluster/data serv2:/gluster/data force
volume create: storage: failed: Glusterfs is not supported on brick:
serv1:/gluster/data.
Setting
file
setfattr: file: Operation not permitted
Anyone managed to run glusterfs on LXD?
Tomasz Chmielewski
https://lxadm.com
picking straws and
would like to hear from others who probably have more experience than
me.
NFS for LXD and iSCSI for KVM?
Just don't run an NFS client against an NFS server on the same machine
(same kernel), as this will break (hang).
Tomasz Chmielewski
https://lxadm.com
(no recursive find etc.).
Tomasz Chmielewski
https://lxadm.com
lxd 2.0.9-0ubuntu1~16.04.2
amd64  Container hypervisor based on LXC - daemon
ii lxd-client 2.0.9-0ubuntu1~16.04.2
amd64  Container hypervisor based on LXC - client
What step did I miss?
Tomasz Chmielewski
On 2017-03-23 13:00, Tomasz Chmielewski wrote:
I've just launched a Ubuntu 16.04 instance on EC2.
Unfortunately, LXD containers are not getting an IP address.
(...)
What step did I miss?
I see that I've run "dpkg-reconfigure -p medium lxd" too late (after the
container w
ver need some manual (or scripted) configuration, will stay
even if the container is stopped/removed (unless some more
configuration/scripting is done etc.).
Does LXD have any built-in mechanism to "redirect ports"? Or, what would
be the preferred way to do it?
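One option worth checking (an assumption on my side; it requires a newer LXD, 2.21 or later, than what 16.04 ships by default) is LXD's proxy device type, which forwards a host port into the container without touching iptables:

```shell
# Hypothetical container named 'web'; forward host port 8080 to
# port 80 inside the container. The device lives with the container,
# so it goes away when the container is removed.
lxc config device add web http80 proxy \
    listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80
```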
Tomasz Chmiel
ver restart.
There is also no clear way to reproduce it reliably (other than running
the server for a long time and starting/stopping a large number of
containers over that time...).
I think it's some kernel issue, but unfortunately I was not able to
debug this
your ssh key for pikachu
Then, connect with ssh -X:
ssh -X container_IP
export MOZ_NO_REMOTE=1 ; firefox
MOZ_NO_REMOTE=1 in the container is needed in case you run Firefox both
locally and over SSH - otherwise, it won't be possible to start two
separate Firefox instances.
Tomasz Chmielews
, stop it,
then replace "rootfs" with the one from the source system. But it also
doesn't look "nice enough".
And we lose any container settings set with all of the above approaches.
Tomasz Chmielewski
https://lxadm.com
On 2017-04-13 21:28, Ron Kelley wrote:
Wha
ve:
- tar a selected container
- copy it via SSH somewhere
- restore at some later point in time somewhere else, on a different,
unrelated LXD server
Tomasz Chmielewski
https://lxadm.com
the recommended approach to deal with it?
Adding LXD rules to iptables-persistent?
Tomasz Chmielewski
https://lxadm.com
It's just an example, like 'foo'.
Tomasz Chmielewski
https://lxadm.com
On 2017-04-16 09:54, gunnar.wagner wrote:
why does it have to be ubuntu:pokemon? why not any OS? or is
'pokemon' just a placeholder like 'foo'
On 4/15/2017 11:29 PM, Matlink wrote:
Well, the lxc 1.0 version didn't
ctl restart lxd" should fix
things without interrupting your containers, but we'd definitely like
to
figure this one out if we can.
Yes, restarting lxd fixed it.
stracing different threads was showing a similar output to what I've
pasted before. Stuck in some kind of loop?
Toma
R|POLLHUP|0x2000}], 1, -1
[pid 19124] <... poll resumed> )= 1 ([{fd=28,
revents=POLLNVAL}])
[pid 19120] <... poll resumed> )= 1 ([{fd=13,
revents=POLLNVAL}])
[pid 19124] poll([{fd=28,
events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1
[pid 19120] poll([{fd=13,
even
no time for any interaction on the
mailing list / github.
Tomasz Chmielewski
https://lxadm.com
GBs from
each OOM to OOM, until it stopped happenin
Tomasz Chmielewski
https://lxadm.com
On Saturday, July 15, 2017 18:36 JST, Saint Michael <vene...@gmail.com> wrote:
> I have a lot of memory management issues using pure LXC. In my case, my box
> has only one container. I use LX
On Tuesday, August 01, 2017 18:04 JST, Sjoerd <sjo...@sjomar.eu> wrote:
>
>
> On 30-07-17 17:15, Tomasz Chmielewski wrote:
> > Bug or a feature?
> >
> > # lxc network create dev
> > error: Failed to run: ip link add dev type bridge: Error: eit
I think fan is single-server only and/or won't cross different networks.
You may also take a look at https://www.tinc-vpn.org/
Tomasz
https://lxadm.com
On Thursday, August 03, 2017 20:51 JST, Félix Archambault
wrote:
> Hi Amblard,
>
> I have never used it, but
On Wednesday, July 12, 2017 20:33 JST, "Tomasz Chmielewski" wrote:
> In the last days, lxc commands are failing randomly.
>
> Example (used in a script):
>
> # lxc config set TDv2-z-testing-a19ea62218-2017-07-12-11-23-03 raw.lxc
> lxc.aa_allow_inc
~ubuntu16.04.1~ppa1 amd64
Container hypervisor based on LXC - client
Not sure how I can debug this.
--
Tomasz Chmielewski
https://lxadm.com
On Thursday, July 13, 2017 00:35 JST, "Tomasz Chmielewski" <man...@wpkg.org>
wrote:
> On Wednesday, July 12, 2017 20:52 JST, "Tomasz Chmielewski" <man...@wpkg.org>
> wrote:
>
> > It only fails with "error: not found" on the first, se
On Wednesday, July 12, 2017 20:52 JST, "Tomasz Chmielewski" <man...@wpkg.org>
wrote:
> It only fails with "error: not found" on the first, second or third "lxc
> config" line.
>
> It started to fail in the last 2 weeks I think (lxd updates?)
4:21.01927786Z\",\n\t\t\"status\":
\"Running\",\n\t\t\"status_code\": 103,\n\t\t\"resources\":
{\n\t\t\t\"containers\":
[\n\t\t\t\t\"/1.0/containers/vpn-hz1\"\n\t\t\t]\n\t\t},\n\t\t\"metadata\":
{\n\t\t\t\&quo
, going forward, the behaviour won't change anymore? :)
Tomasz Chmielewski
https://lxadm.com
dir/file1 /tmp/testdir/file2
# lxc file push -r /tmp/testdir/ testvm2/tmp # < note the trailing
slash after /tmp/testdir/
# lxc exec testvm2 ls /tmp
testdir
# lxc exec testvm2 ls /tmp/testdir
file1 file2
This breaks many scripts!
--
Tomasz Chmielewski
https://lxa