Hi, is anyone aware of any project with bindings for PHP?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users
On 08/09/14 23:13, Leandro Ferreira wrote:
Hi, is anyone aware of any project with bindings for PHP?
You can use libvirt php bindings
Yep thanks. ATM I can't get it to compile on Ubuntu 14.10 and if I get
past that hurdle there doesn't seem to be much documentation to help
figure out how to
This is probably dumb and has been asked a thousand times before, but I am
totally confused by various tutorials that are either too old or do not
apply to my particular needs. For the purpose of ongoing/endless
testing I'd like containers to be given an IP in the same network
range via DHCP and my WIFI
On 22/10/14 12:27, CDR wrote:
https://github.com/robinmonjo/dlrootfs
Heh, this just made LXC twice as useful in one fell swoop and now
I have some reason to use my docker hub account! Many thanks.
https://linuxcontainers.org/lxd/getting-started-cli/ says...
To pull a file from the container, use:
lxc file pull /etc/hosts .
To push one, use:
lxc file push hosts /tmp
but how can lxc file push|pull know what container to operate on?
This does not seem to clarify the issue...
~ lxc help
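The answer, as far as I can tell, is that the container name is given as the first component of the source or target path. A small sketch ("c1" is a placeholder container name; the string-splitting lines are just there to make the convention explicit):

```shell
# lxc file addresses a file as <container>/<path>.
# The first path component is the container, the rest is the path inside it:
src="c1/etc/hosts"
container="${src%%/*}"   # the container name
inner="/${src#*/}"       # the path inside the container
echo "$container $inner"
# So the getting-started examples, made explicit:
#   lxc file pull c1/etc/hosts .
#   lxc file push hosts c1/tmp/hosts
```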
On 18/03/15 16:20, Guido Jäkel wrote:
lxc.cgroup.cpuset.cpus = 7
is this proven to work as intended?
[...]
your example would mean to select the single core #7.
That is intended for testing. I find that the last cpu seems to be
the least used (from simple top tests) so I figure it's a
Ubuntu vivid w/ lxd-daily.
When setting up unprivileged containers via lxc-create these cgroup
settings come from ~/.config/lxc/default.conf and for the most part
seem to work.
lxc.cgroup.memory.soft_limit_in_bytes = 256M
lxc.cgroup.memory.limit_in_bytes = 512M
I've tried to see if btrfs qgroup quotas applied to the btrfs subvolumes
automatically created with both lxc-create and lxc create would
be reflected within a container via df but no such luck yet. I'm
just wondering if I am doing this right and/or any hints I may be
missing?
Or is this another thing
On 21/03/15 01:27, Serge Hallyn wrote:
Since the past 2 days of updates I notice my lxc and lxd containers just hang
when trying to shut them down, consequently rebooting my laptop requires a
hard power button reset (so far btrfs on a SSD has survived). Also my main
system privileged container
On 22/03/15 02:12, Michael H. Warfield wrote:
Append linux to the end of those lines in each file so it reads like
this (for tty1.conf)
exec /sbin/getty -8 38400 tty1 linux
Wonderful thorough explanation, thanks Mike.
FWIW it's also possible to do something like this (not documented in
On 29/04/15 17:17, Fırat KÜÇÜK wrote:
lxc.cgroup.memory.limit_in_bytes = 2048M
but my container free -h output shows 32GB
Is there anything that I missed?
For me on ubuntu I had to add this to my default grub line and reboot...
~ grep cgroup /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT=quiet
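For reference, the additions I mean are the memory cgroup and swap accounting kernel parameters; a sketch of the resulting /etc/default/grub line (the "quiet" value is whatever your line already had):

```shell
# /etc/default/grub on the host; cgroup_enable=memory and swapaccount=1
# are the kernel parameters that make the memory/swap limits take effect
GRUB_CMDLINE_LINUX_DEFAULT="quiet cgroup_enable=memory swapaccount=1"
# afterwards: sudo update-grub && sudo reboot
```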
I thought I'd try going back to normal privileged containers which will at
least (or did pre-systemd) autostart. The only change from defaults is my
own br0 to put the containers on my local network...
~ grep br0 /etc/lxc/*
/etc/lxc/default.conf:lxc.network.link = br0
And on 15.04 I've done a
On Sat, 16 May 2015 08:03:26 PM Kevin LaTona wrote:
With a LXD based LXC container what iptables magic does one need to
be able to access these 10.0.3.x containers from outside that local
network?
So far I got it so I log into a 10.0.3.x based container and ping the
outside world.
The last
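One common approach is a DNAT rule on the host forwarding a public port to the container. A sketch (the container address and ports are assumptions; the rule is printed rather than executed so it can be reviewed first):

```shell
# forward host port 2222 to ssh on a container behind the NATed lxcbr0
CONTAINER_IP=10.0.3.10   # assumed container address
HOST_PORT=2222
rule="PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to-destination $CONTAINER_IP:22"
echo "iptables -t nat -A $rule"   # run this as root on the host
# and make sure forwarding is enabled:  sysctl -w net.ipv4.ip_forward=1
```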
On Fri, 15 May 2015 10:54:08 PM Kevin LaTona wrote:
I was reading about ways in legacy LXC of being able to have the DHCP server
assign static IP's to containers at startup based upon container name.
If one is using Ubuntu 15.04, systemd and LXD is that still possible?
Hey Kevin, I just set
On 05/04/15 02:22, david.an...@bli.uzh.ch wrote:
I see that nobody replied to your question.
Have you made any progress in the meantime?
Not really. I've actually dropped back to using pre-systemd trusty in
privileged containers until ubuntu 15.04 is released hoping that might
provide better
What is the status of redistribution rights for the LXD logo?
I want to start a series of blog posts about my experiences with LXD
so the current logo would be nice to re-use but I don't want Canonical
lawyers, or anyone, to come after me with nasty takedown notices.
Is there a tutorial or blog post about using lxc snapshot anywhere?
~ lxc version
0.10
~ lxc list
+------+-------+------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | EPHEMERAL | SNAPSHOTS |
On Fri, 5 Jun 2015 05:05:21 PM Serge Hallyn wrote:
Does this mean that btrfs is considered a second class option
It is, for a few reasons.
Sorry to persist with this but would you mind elaborating briefly on
some of those reasons or point me to further discussion please?
We
On Thu, 4 Jun 2015 01:33:03 AM Tobby Banerjee wrote:
https://www.flockport.com/start
What's with this login first crap?
https://www.flockport.com/download/flockport-install.tar.xz
I don't appreciate being spammed on a public mailing list like this no
matter how good you think your intentions
I was excited to see 0.11 arrive on a wily host and thought it might
solve a few of the issues with systemd-journal not starting up (and
therefore the network not coming up either) but it doesn't seem so. In fact
after a launch of today's vivid image, which starts, there is no IP (as before)
and I can't exec bash
On Mon, 15 Jun 2015 10:33:14 AM Genco Yilmaz wrote:
Is there any way to link multiple containers without using softbridge
+veth pair by using network.type vlan? or what is the best practice
in this type of topology?
If you just want to expose containers to the host's private or public
And indeed it does so lxd config set core.https_address 192.168.0.2 got
me a port :8443 on my local host as soon as I hit enter, as advertised.
However after a reboot I got an error about no port for the config option
and lxd refused to start which made it awkward to update the config option
so
Just updated kubuntu 15.10 and rebooted and now getting this error...
~ lxc start goldcoast.org --debug
DBUG[08-16|20:46:19] fingering the daemon
DBUG[08-16|20:46:19] raw response:
I just did an update on a 15.10 host and a new version of lxd and lxd-client
showed up but on reboot my lxcbr0 bridge does not get created, as it had been
up till now. The only journald error I can see is...
Aug 23 15:39:29 mbox lxd[2980]: error: cannot listen on https socket: listen
tcp
To continue from the previous post, I just noticed this...
Aug 23 16:50:04 mbox audit[2550]: AVC apparmor="DENIED" operation="mount"
info="failed type match" error=-13 profile="lxc-container-default"
name="/sys/fs/cgroup/" pid=2550 comm="systemd"
flags="ro, nosuid, nodev, noexec, remount, strictatime"
On 24/08/15 08:19, Stéphane Graber wrote:
I'm not aware of any change there. Can you manually run:
sudo lxd --group lxd --debug
Thanks for looking into this and apologies to everyone else for a large dump.
Just 2 containers, this is the above startup then a lxc list then a lxc start
On Sunday, August 16, 2015 08:50:54 PM Mark Constable wrote:
Just updated kubuntu 15.10 and rebooted and now getting this error...
error saving config file for the container failed
I just did an strace lxc start goldcoast.org and it seems to be getting stuck
using the API so I suspect my
I have 2 *buntu 15.10 hosts and my local one has a few trusty, utopic and
wily containers. I've just updated a local LAN remote NAS to wily (so both
ends run the same version of lxd/lxc) and want to test copying and migration.
However, neither my local nor remote test machines have anything
On Tuesday, August 11, 2015 10:25:47 PM Kevin LaTona wrote:
However, neither my local nor remote test machines have anything running on
port 8443. Is there some trick to start lxd plus access via port 8443?
Yes, there is :-)
There was an issue like this back around 0.8 but it was fixed.
Hey
I'm just trying to do a simple rsync backup of /var/lib/lxd/ to a local NAS
and get these errors as root on the host. I understand the issue of subgids
and subuids not being valid on the target machine (the below errors are
coming from the destination) outside the normal range of uid/gids but
On Sat, 8 Aug 2015 06:32:26 PM Serge Hallyn wrote:
I'm just trying to do a simple rsync backup of /var/lib/lxd/ to a local NAS
Not valid - is that because the NAS only supports a single uid, i.e. it's vfat
or somesuch? Or does it support a smaller range?
No, it's an Ubuntu 15.04 server based
On Monday, July 20, 2015 08:44:12 AM Mahesh Pujari wrote:
I had created a container with default settings and now it fails to start
due to space issues, how can I increase space of a container which has
exhausted its space.
Most likely you have to deal with this on a lower filesystem or
*buntu wily host and unprivileged lxd containers. This used to work but as
you can see I seem to need dbus... on a headless server!
lxc exec w1 -- bash -c 'systemctl restart networking.service'
Failed to get D-Bus connection: No such file or directory
Using a wily image from today, how do I
On Thursday, July 23, 2015 01:32:33 PM Fajar A. Nugraha wrote:
How do you folks get a working network with current wily containers?
IIRC networking on unprivileged systemd container was broken due to
some changes in systemd. This is true even on current release (vivid).
Yep, something
On Thursday, July 23, 2015 02:51:56 PM Fajar A. Nugraha wrote:
That's a bizarre workaround. It looks like the best workaround for
now is to stick to utopic containers and try again in another month.
I believe I've said this before, but you'd better not use utopic as it's
EOLed today. To
On 07/10/15 18:53, Stéphane Graber wrote:
Is there any chance this restriction could be loosened slightly to include
a dot char to re-enable a FQDN for container names?
Not all operating systems we may run on at some point support dots in
their hostnames, so allowing this would make things
lxc v0.19 on Ubuntu 15.10 host.
~ lxc launch wily abc
Creating abc done.
Starting abc done.
~ lxc launch wily abc.lxc
Creating abc.lxc error: Invalid container name
The 2nd one above used to work.
Why are dotted domain-like container names now invalid?
So as not to hijack Peter Steele's CentOS thread I'd like to ask a similar question
about the best way to tweak either the LXC network settings or
/etc/network/interfaces to provide the missing pieces for non-NAT bridging.
I modify lxc-net to bring up a bridge using my native internal LAN network
On 29/08/15 23:54, Peter Steele wrote:
For example, I see references to the file /etc/network/interfaces. Is this an
LXC thing or is this a standard file in Ubuntu networking?
It's a standard pre-systemd debian/ubuntu network config file.
Mark Constable asked a related question stemming
On 08/12/15 18:07, Eldon Kuzhyelil wrote:
Okay, I am now doing it with ethernet. So basically I am trying to
set up a webserver in my LXC container and my system is connected to the router
via an ethernet cable. I want the web page to be visible from another system
connected to this LAN. What
On 09/12/15 21:41, Eldon Kuzhyelil wrote:
auto lo
iface lo inet loopback
You may be missing...
auto eth0
iface eth0 inet manual
auto br0
iface br0 inet static
[...]
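A fuller sketch of such an /etc/network/interfaces bridge stanza; the addresses are placeholders and assume the classic pre-netplan bridge-utils setup:

```
# /etc/network/interfaces on the host (addresses are placeholders)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```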
On 07/12/15 15:41, Eldon Kuzhyelil wrote:
Mr. Tamas, if you have any documents for the options you have mentioned,
please provide them, as I am new to this domain.
If you are testing LXD on a laptop then keep in mind you can't use bridging via
WIFI so if you can use an ethernet cable from your main
On 18/11/15 08:22, Robert Koretsky wrote:
I have successfully installed and created/started LXC containers on Ubuntu
15.10, but cannot get them to be visible on my home network. I do an ifconfig
on both the host and in a container, and see the IPv4 address of lxcbr0 as
10.0.3.1, but after
Is anyone aware of any kind of systemd-smart/websocket based web frontend for
the lxd daemon similar in scope to http://cockpit-project.org/ ?
Outside a container on the host I can...
~ /sbin/setcap cap_net_bind_service=+ep /usr/bin/caddy
~ getcap /usr/bin/caddy
/usr/bin/caddy = cap_net_bind_service+ep
but inside a container I get...
~ /sbin/setcap cap_net_bind_service=+ep /usr/bin/caddy
Failed to set capabilities on file
FWIW another package that requires setcap. This is the first one I've seen
that falls back to setuid OOTB.
Setting up mtr-tiny (0.86-1) ...
Failed to set capabilities on file `/usr/bin/mtr' (Invalid argument)
The value of the capability argument is not permitted for a file. Or the file
is not a
Using a freshly reinstalled xenial host and containers with
2.0.0~beta4-0ubuntu4, which got the packages installed after removing
everything and starting again, but I've had this problem for a couple of
weeks now...
~ lxc image list
On 25/02/16 20:16, Tamas Papp wrote:
# /sbin/setcap 'cap_net_bind_service=+ep' /usr/bin/nodejs
Failed to set capabilities on file `/usr/bin/nodejs' (Invalid argument)
The value of the capability argument is not permitted for a file. Or the file
is not a regular (non-symlink) file
Can we
On 26/02/16 05:56, Serge Hallyn wrote:
Hopefully in the next month or two I'll get time to get that
working in the kernel. Which means a few more months before
it'd be really available.
Can we expect it to be backported to Xenial?
No, but the HWE and such kernels will have it. They are
On 19/02/16 02:32, Serge Hallyn wrote:
# for containers to allow suid exec
echo 0 > /proc/sys/fs/protected_hardlinks
on the host but that is going to be awkward for folks who do not happen
to know this "trick" meaning generally trying to install the courier-mta
package on unpriv containers is
On 19/02/16 12:21, Serge Hallyn wrote:
Unpacking systemd (229-1ubuntu2) over (228-5ubuntu3) ...
dpkg: error processing archive
/var/cache/apt/archives/systemd_229-1ubuntu2_amd64.deb (--unpack):
unable to make backup link of './bin/systemctl' before installing new
version: Operation not
On 14/02/16 03:20, Serge Hallyn wrote:
but inside a container I get...
~ /sbin/setcap cap_net_bind_service=+ep /usr/bin/caddy
Failed to set capabilities on file `/usr/bin/caddy' (Invalid argument)
If not in a user namespace, ... well it works for me, but you may
have to edit the files under
On 02/03/16 01:34, Benoit GEORGELIN - Association Web4all wrote:
User A will have his own space for containers
User B will have his own space for containers
They should do "lxc-ls -f" or "lxc list" and see only their own containers
Maybe this is not a typical use case?
I think the best way
I'm not sure if this is already possible but a suggestion for lxc list
would be to provide a "clean" output option without ascii borders. Using
mysql as an example it would be neat if something like this was possible...
[[ $(lxc list $HOST -BN -cs) = RUNNING ]] && echo yay || echo nay
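In the meantime, something close to this is possible with the csv output that later LXD releases provide. A hypothetical wrapper ("c1" is a placeholder container name, and --format csv is assumed to be available in your version):

```shell
# true if the named container is RUNNING, using machine-readable output
is_running() {
    [ "$(lxc list "$1" --format csv -c s)" = "RUNNING" ]
}
# usage: is_running c1 && echo yay || echo nay
```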
~ mysql
On 10/03/16 18:54, Stéphane Graber wrote:
We've had a few folks ask for a --format option of some sort which
would allow them to get the info in csv, tabular or json/yaml
format.
One simple approach could be that if one used the default lxc list
with anything other than -c (or --columns) then
On 09/03/16 16:14, Stéphane Graber wrote:
Where do I find the documentation for all possible memory/swap/whatever
limits that can be applied to LXD 2.0.0~rc2-0ubuntu2 unpriv containers?
And boot parameters etc.
https://github.com/lxc/lxd/blob/master/specs/configuration.md
Thanks.
Where do
I've done this 100s of times before but for some reason I'm getting an
error trying to start an unpriv container. Any clues?
Xenial LXD 2.0.0~rc2-0ubuntu2 w/ btrfs
~ lxc image copy upstream:/ubuntu/xenial/amd64 local: --alias=xenial
Image copied successfully!
~ lxc image list
On 09/03/16 17:01, Stéphane Graber wrote:
Where do I find the kernel boot parameters to enable memory and swap limits?
swapaccount=1
https://www.kernel.org/doc/Documentation/kernel-parameters.txt
Thanks yet again. Now I have something to google for would you mind confirming
this one is or is
Where do I find the documentation for all possible memory/swap/whatever
limits that can be applied to LXD 2.0.0~rc2-0ubuntu2 unpriv containers?
And boot parameters etc.
On 17/03/16 23:01, Janne Savikko wrote:
You cannot use filters to list running or stopped containers. lxc
start or stop do not support filters, only container names.
You can, though, always pipe commands if you want to stop dozens of
containers whose names begin with "web" (note! lxc
On 15/03/16 13:03, efersept wrote:
Thank you Fajar, I have tried putting entries in
/etc/network/interfaces on an Ubuntu host but they are completely
ignored. Well that is not completely true, static IPs can be set for
eth0, but bridge entries and wlan0 entries are ignored. The only
success I have
On 09/05/16 10:18, Ronald Kelley wrote:
I am trying to get some data points on how many containers we can
run on a single host if all the containers run the same applications
(eg: Wordpress w/nginx, php-fpm, mysql). We have a number of LXD 2.0
servers running on Ubuntu 16.04 - each server has 5G
The only 2 options for lxc init are dir and zfs.
Why no btrfs option?
Or, if I choose either one, will the snapshot and send/receive advantages
of btrfs still be taken advantage of? If so, which one should I choose?
I'd like to have LXD manage a host network aware bridge and it almost
works except for needing two extra manual "brctl addif" and "ip route
add default via" steps. Would some variation of this...
lxc network set lxdbr0 ipv4.routes (set default gw)
possibly work for a default route?
On 05/09/16 19:38, Nicola Volpini wrote:
> As per subject: is there any existing project able to manage the
> lifecycle of LXD containers via some form of frontend/webgui?
> [...]
> Anyone out there who managed to integrate these tools and LXD? It
> would be cool to do for LXD what has been done
On 19/10/16 05:02, David Favor wrote:
> Looking for best way to change 10.87.167.115 to 144.217.33.224
> (static/public IP). Prefer doing this in a way where communication
> between host/container + container/container works without adding
> iptables rules, which become complex to manage with
On 19/10/16 07:10, David Favor wrote:
> I'd prefer the "tweak each container config" approach.
> Be great if someone could provide a URL for an example.
Very hard to impossible to find examples of using lxc config to set
the IP of each container and I'm still not sure it's even possible.
Can
Ubuntu zesty btrfs host, xenial zfs containers. How do I increase the size of
the default lxd-loop pool above 10Gb?
lxd-loop/containers/dev.zfs     2.3G  1.8G  477M  80%  /var/lib/lxd/containers/dev.zfs
lxd-loop/containers/docker.zfs  7.4G  7.0G  477M  94%  /var/lib/lxd/containers/docker.zfs
I've tried
On 10/06/17 11:40, Michael Johnson wrote:
I'm able to configure a bridge on the host, and the host uses the
bridge just fine. How do I get the container to use the bridge? The
container seems to ignore all interfaces not created by 'lxc network
create'. I'm guessing because iptables gets
Is it possible to mount a swap file inside a zfs loop based container?
If so, how would I first disable the host swap inside a container?
I tried this...
lxc profile set medium limits.memory.swap false
which gets me this profile...
~ lxc profile show medium
config:
limits.cpu: "2"
Just to complete this thread and kind of mark it [SOLVED]: I got
back to getting these scripts to 99% working after losing my entire
primary BTRFS drive because some typo set my boot partition to
"zfs_volume" (yikes!)
https://raw.githubusercontent.com/netserva/sh/master/bin/setup-lxd
I know to the
On 5/22/17 12:28 PM, Fajar A. Nugraha wrote:
Yes but I also want the current disk usage to be available inside
the container so that, for instance, df returns realistic results.
Have you tried lxd with zfs?
Yes, zfs (pool per container) is what I am currently using here...
On 5/19/17 5:06 PM, Fajar A. Nugraha wrote:
I'm trying to automate a simple setup of LXD via a bash script and
I'm not sure of the best way to provide some preset arguments to "lxd
init", if at all possible. Specifically...
Did you try "lxd --help"?
Sigh, not for the last year or two, thanks
On 5/21/17 4:02 PM, Jeff Kowalczyk wrote:
My question, is it reasonable to provide a separate profile and
zfs pool per container and is there a better or more efficient way
to get the same end result?
Will disk limits work for you?
https://stgraber.org/2016/03/26/lxd-2-0-resource-control-412/
On 5/21/17 11:16 PM, gunnar.wagner wrote:
just for my understanding ... you want to monitor disk usage on the
LXD host, right?
Yes but I also want the current disk usage to be available inside the
container so that, for instance, df returns realistic results.
Using a zfs pool per container
I'm trying to automate a simple setup of LXD via a bash script and I'm
not sure of the best way to provide some preset arguments to "lxd init",
if at all possible. Specifically...
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool:
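Later LXD releases grew a non-interactive preseed mode that answers these questions from YAML on stdin. A minimal sketch (the pool name "default" and the zfs backend are assumptions; the file is only written here, the final command is left as a comment):

```shell
# write a minimal lxd init preseed answering the storage questions
cat > /tmp/lxd-preseed.yaml <<'EOF'
storage_pools:
- name: default
  driver: zfs
EOF
# then, on a host with a preseed-capable LXD:
#   lxd init --preseed < /tmp/lxd-preseed.yaml
```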
On 30/05/17 10:17, Luis Michael Ibarra wrote:
For now we have discussions, Core dev blogs, github *md files, lxd
wiki, etc. Shouldn't be useful to have an official documentation
channel?
I lean towards an independent option so along those lines this is one
possibly crazy suggestion, FWIW...
-
On 01/06/17 02:34, Adil Baig wrote:
lxc config device add mycontainer etchosts disk path=/etc/hosts
source=/etc/hosts
1. Is very cool! I tried it and it works.
Yes, a good hint to know about, thanks simplyadilb.
I am more interested in 2. as it seems more future proof when we move
away
I asked this question ~18 months ago...
https://lists.linuxcontainers.org/pipermail/lxc-users/2015-November/010516.html
Anyone aware of any new Cockpit module development for LXD?
As I noted before, any Canonical OpenStack based service is way too
heavy for what I want. Cockpit itself has
On 15/06/17 12:56, Stéphane Graber wrote:
https://lists.linuxcontainers.org/pipermail/lxc-users/2015-November/010516.html
Anyone aware of any new Cockpit module development for LXD?
But we do have a good friend, Martin Pitt, who's working on the
Cockpit team and who's pretty familiar with LXD.
On 10/06/17 12:03, Michael Johnson wrote:
Hi Mark. Thanks for responding. I've done exactly what you suggest, and
here is the result from within the container:
ip -4 route show
default via 192.168.0.1 dev eth0 metric 12
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.36
[...]
Has something changed re networking with LXD 3.0 such that when
using a macvlan the host CAN ping a container?
According to what I previously understood, and supported by this
comment..
https://github.com/lxc/lxd/issues/3871#issuecomment-333124249
and the main reason I hadn't bothered even
On 5/5/18 5:43 PM, Janjaap Bos wrote:
To be able to ping a container macvlan interface, you need to have a
macvlan interface configured on the host.
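The host-side macvlan shim mentioned above might look like the following sketch; the interface names and address are assumptions, and the commands are printed rather than executed so they can be reviewed first:

```shell
# a macvlan shim on the host so host<->container traffic has a path
PARENT=eth0            # assumed physical interface
SHIM=macvlan0          # name for the shim interface
ADDR=192.168.0.250/24  # assumed spare address on the LAN
echo "ip link add $SHIM link $PARENT type macvlan mode bridge"
echo "ip addr add $ADDR dev $SHIM"
echo "ip link set $SHIM up"
# run the three printed commands as root on the host
```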
Thank you for the host macvlan snippet but I CAN actually ping the
container from the host (but not the host from inside the container)
and that
On 5/6/18 4:04 AM, Michel Jansens wrote:
how come I can ping the container from my host when I just set up
that container using macvlan?
Well, on my system with the latest install of Ubuntu 18.04 and LXD 3.0,
the host can't reach a container in a macvlan setup. The container
can't connect to the
On 5/3/18 4:09 PM, Kees Bos wrote:
> On Thu, 2018-05-03 at 12:58 +0900, Tomasz Chmielewski wrote:
>> Reproducing is easy:
>>
>> # lxc launch images:ubuntu/bionic/amd64 bionic-broken-dhcp
>>
>> Then wait a few secs until it starts - "lxc list" will show it has
>> IPv6 address (if your bridge
On 5/3/18 12:42 PM, Tomasz Chmielewski wrote:
> > Today or yesterday, bionic image launched in LXD is not getting an IPv4
> > address. It is getting an IPv6 address.
If you do a "lxc profile show default" you will probably find it doesn't
have an IPv4 network attached by default. I haven't yet
On 10/08/18 17:20, Pierre Couderc wrote:
Note that lxc1 and lxd from snap uses different directories than
lxd from package.
Sorry for the noise: I use lxd (built from sources on debian), and I had
not seen that /var/lib/lxc exists but is empty...
FWIW /var/lib/lxc != /var/lib/lxd
On 06/04/18 03:33, Bhangui, Avadhut Upendra wrote:
> I have a requirement that the solution running inside the container
> should be able to communicate to services in public cloud and also
> with some services on the host machine.
>
> 1. How do I setup the networking of this container? 2. When
On 22/4/18 3:21 am, David Favor wrote:
> Removing Netplan will work temporarily, until all the old networking
> plumbing is completely removed. Better to start moving to Netplan
> now, before some future update removes old processing of your
> /etc/network/interfaces files + all your networking
LXD 2.21 with a month-old *buntu bionic host and new containers. When
installing something like postfix I am getting this error, which obviously
cripples postfix...
postfix/postfix-script: warning: not set-gid or not owner+group+world
executable: /usr/sbin/postqueue
postfix/postfix-script: