Re: [lxc-users] Snap 2.20 - Default Text Editor

2017-11-21 Thread Ron Kelley
sudo update-alternatives --config editor

http://vim.wikia.com/wiki/Set_Vim_as_your_default_editor_for_Unix
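
If you want to do it non-interactively, something along these lines should also work (nano's path may differ on your system):

sudo update-alternatives --set editor /bin/nano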





> On Nov 21, 2017, at 7:49 PM, Lai Wei-Hwa  wrote:
> 
> Thanks, but that's the problem, it's still opening in VI
> 
> Thanks! 
> Lai
> 
> - Original Message -
> From: "Björn Fischer" 
> To: "lxc-users" 
> Sent: Tuesday, November 21, 2017 7:46:58 PM
> Subject: Re: [lxc-users] Snap 2.20 - Default Text Editor
> 
> Hi,
> 
>> $ lxc profile edit default
>> Opens in VI even though my editor is nano (save the flaming)
>> 
>> How can we edit the default editor?
> 
> $ EDITOR=nano
> $ export EDITOR
> 
> Cheers,
> 
> Björn
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Using a mounted drive to handle storage pool

2017-11-21 Thread Ron Kelley
Perhaps you should use “bind” mount instead of symbolic links here?

mount -o bind /storage/lxd /var/snap/lxd

You probably also need to make sure the mount survives a reboot.  
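
For example, an /etc/fstab entry along these lines should keep the bind mount across reboots (paths taken from the commands above):

/storage/lxd    /var/snap/lxd    none    bind    0  0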



-Ron




> On Nov 21, 2017, at 5:47 PM, Lai Wei-Hwa  wrote:
> 
> In the following scenario, I:
> 
> $ sudo mount /dev/sdb /storage
> 
> Then, when I do:
> 
> $ sudo ln -s /storage/lxd lxd
> $ snap install lxd
> $ sudo lxd init
> error: Unable to talk to LXD: Get http://unix.socket/1.0: dial unix 
> /var/snap/lxd/common/lxd/unix.socket: connect: no such file or directory
> 
> 
> 
> Thanks! 
> Lai
> 
> From: "Lai Wei-Hwa" 
> To: "lxc-users" 
> Sent: Tuesday, November 21, 2017 1:37:18 PM
> Subject: [lxc-users] Using a mounted drive to handle storage pool
> 
> I've just migrated LXD from the Canonical PPA to Snap. 
> 
> I have 2 RAIDS:
>   • /dev/sda - ext4 (this is root device)
>   • /dev/sdb - btrfs (where I want my pool to be with the containers and snapshots)
> How/where should I mount my btrfs device? What's the best practice in having 
> the pool be in a non-root device? 
> 
> There are a few approaches I can see:
>   • mount /dev/sdb to /var/snap/lxd (or /var/lib/lxd - if you're using PPA) ... then: lxd init
>   • mount /dev/sdb to /storage and: ln -s /storage/lxd /var/snap/lxd ... then: lxd init
>   • lxd init and choose existing block device /dev/sdb
> What's the best practice and why?
> 
> Also, I'd love it if LXD could make this a little easier and let users more 
> easily define where the storage pool will be located. 
> 
> Best Regards,
> 
> Lai 
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] TTY issue

2017-11-20 Thread Ron Kelley

> On Nov 20, 2017, at 1:58 AM, Marat Khalili <m...@rqc.ru> wrote:
> 
> On 19/11/17 22:45, Ron Kelley wrote:
>> In all seriousness, I just ran some tests on my servers to see if SSH is 
>> still the bottleneck on rsync.  These servers have dual 10G NICs (linux 
>> bond), 3.6GHz CPU, and 32G RAM.  I found some interesting data points:
>> 
>> * Running the command "pv /dev/zero | ssh $REMOTE_SERVER 'cat > /dev/null'" 
>> I was able to get about 235MB/sec between two servers with ssh pegged at 
>> 100% CPU usage. [...]
>> In the end, rsync over NFS (using 10G networking) is much faster than rsync 
>> using SSH keys in my environment.  Maybe your environment is different or 
>> you use different ciphers?
> 
> Very good data points. I agree that you can saturate ssh if you have a 10G 
> network connection and either SSDs or some expensive HDD arrays on both sides 
> and some sequential data to transfer. If you don't have any of the listed items, 
> ssh does not slow things down.
Agreed as well.  In fact, I was a little surprised to see a “typical” server 
run over 200MB/sec in SSH traffic.  But, to be honest, it has been a while 
since I have done any sort of ssh speed tests.


> As for not trusting the LAN with unencrypted traffic, I would argue either 
> the security policies are not well enforced or the server uses insecure NFS 
> mount options.  I have no reason not to trust my LAN.
> I was just afraid that someone reading your post would copy-paste your 
> configuration to use over less secure LAN or even WAN. (I admit this is not a 
> big problem for original poster since FTP is not much better in this regard.)
Ah, yes, understood.  If someone is concerned about unencrypted rsync traffic, 
you could add tighter security options to the rsyncd.conf file and/or use NFSv4 
with higher security.
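
As a rough sketch (the user name and secrets file are made-up examples), rsyncd.conf supports per-module authentication like this:

[BACKUP]
path = /usr/local/backup_dir
auth users = backupuser
secrets file = /etc/rsyncd.secrets
hosts allow = 192.168.1.0/24
read only = no

The client side would then authenticate as that user, e.g. rsync -arv / backupuser@192.168.1.10::BACKUP/...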


> 
> 
> --
> 
> With Best Regards,
> Marat Khalili
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] TTY issue

2017-11-19 Thread Ron Kelley
I just checked my calendar, and it says November 19, 2017.  So, I am going to 
say, the 21st century! 

In all seriousness, I just ran some tests on my servers to see if SSH is still 
the bottleneck on rsync.  These servers have dual 10G NICs (linux bond), 3.6GHz 
CPU, and 32G RAM.  I found some interesting data points:

* Running the command "pv /dev/zero | ssh $REMOTE_SERVER 'cat > /dev/null'" I 
was able to get about 235MB/sec between two servers with ssh pegged at 100% CPU 
usage. 

* I mounted an NFS share from one server to the other and was able to hit 
450MB/sec via rsync across an NFS mount (rsync pegged at 100% CPU).  

* Running the command "rsync --progress root@:/export/tmp/file1 
/mnt/ramdisk/file1" gave me 130MB/sec with sshd pegged at 100% CPU and rsync at 
22% CPU.


In the end, rsync over NFS (using 10G networking) is much faster than rsync 
using SSH keys in my environment.  Maybe your environment is different or you 
use different ciphers?



As for not trusting the LAN with unencrypted traffic, I would argue either the 
security policies are not well enforced or the server uses insecure NFS mount 
options.  I have no reason not to trust my LAN.

-Ron





> On Nov 19, 2017, at 10:39 AM, Marat Khalili  wrote:
> 
>> My experience has shown rsync over ssh is by far the slowest because of the 
>> ssh cipher. 
> 
> What century is this experience from? Any modern hardware can encrypt at IO 
> speed several times over. Even LAN, on the other hand, cannot be trusted with 
> unencrypted data.
> -- 
> 
> With Best Regards,
> Marat Khalili
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] TTY issue

2017-11-19 Thread Ron Kelley
Maybe I missed something here, but you have a government system that allows FTP 
but not NFS?



> On Nov 19, 2017, at 10:17 AM, Saint Michael <vene...@gmail.com> wrote:
> 
> The server is at the government. I would go to jail.
> But thanks for the input.
> 
> On Sun, Nov 19, 2017 at 8:03 AM, Ron Kelley <rkelley...@gmail.com> wrote:
> Can you install an rsync daemon on the server side?  If so, simply create 
> /etc/rsyncd.conf file with this:
> 
> [BACKUP]
> comment = Allow RW access for backups
> path = /usr/local/backup_dir
> uid = root
> hosts allow = 192.168.1.46, 192.168.1.47
> read only = no
> 
> 
> Next, on each of your remote clients, simply run rsync via cron job to your 
> server.   Something like this:
> 
> /usr/bin/rsync -arv / root@192.168.1.10::BACKUP/192.168.1.46/
> 
> The above assumes your server IP is 192.168.1.10 and your client IP is 
> 192.168.1.46.  Also, note the trailing slash (/) on the second rsync 
> argument.  It ensures the files get put into the right directory.
> 
> I run rsync scripts each night on lots of client machines (specifically LXD 
> containers running wordpress) to a central backup server.  It works great.
> 
> 
> BTW: I don’t know if I could say rsync over a network mount is the worst 
> possible solution ever.  I have used rsync for a long time and using a 
> variety of network connections (rsync daemon, nfs, rsync via ssh).  My 
> experience has shown rsync over ssh is by far the slowest because of the ssh 
> cipher.  Rsync over nfs mount is very fast - almost as fast as a local copy.
> 
> -Ron
> 
> 
> 
> 
> 
> 
> > On Nov 18, 2017, at 9:37 PM, Saint Michael <vene...@gmail.com> wrote:
> >
> > How do you rsync over SSH when all you have is a Plain Old FTP server to 
> > connect to?
> > Maybe there is something I need to learn.
> >
> > On Sat, Nov 18, 2017 at 5:08 PM, Andrey Repin <anrdae...@yandex.ru> wrote:
> > Greetings, Saint Michael!
> >
> > > I need to do an rsync of hundreds of files very morning. The least complex
> > > way to achieve that is to do an rsync with some parameters that narrow 
> > > down what files I need.
> > > Is there a better way?
> >
> > rsync over a network mount is the WORST POSSIBLE SOLUTION EVER.
> > Use normal rsync over SSH, it will be much faster, even if you do checksum
> > syncs.
> >
> >
> > --
> > With best regards,
> > Andrey Repin
> > Sunday, November 19, 2017 01:07:34
> >
> > Sorry for my terrible english...
> >
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> >
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] TTY issue

2017-11-19 Thread Ron Kelley
Can you install an rsync daemon on the server side?  If so, simply create 
/etc/rsyncd.conf file with this:

[BACKUP]
comment = Allow RW access for backups
path = /usr/local/backup_dir
uid = root
hosts allow = 192.168.1.46, 192.168.1.47
read only = no


Next, on each of your remote clients, simply run rsync via cron job to your 
server.   Something like this:

/usr/bin/rsync -arv / root@192.168.1.10::BACKUP/192.168.1.46/

The above assumes your server IP is 192.168.1.10 and your client IP is 
192.168.1.46.  Also, note the trailing slash (/) on the second rsync argument.  
It ensures the files get put into the right directory.
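
For reference, the cron entry might look something like this (schedule and log path are just examples):

# /etc/cron.d/nightly-backup
30 2 * * * root /usr/bin/rsync -arv / root@192.168.1.10::BACKUP/192.168.1.46/ >> /var/log/backup-rsync.log 2>&1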

I run rsync scripts each night on lots of client machines (specifically LXD 
containers running wordpress) to a central backup server.  It works great.


BTW: I don’t know if I could say rsync over a network mount is the worst 
possible solution ever.  I have used rsync for a long time and using a variety 
of network connections (rsync daemon, nfs, rsync via ssh).  My experience has 
shown rsync over ssh is by far the slowest because of the ssh cipher.  Rsync 
over nfs mount is very fast - almost as fast as a local copy.

-Ron






> On Nov 18, 2017, at 9:37 PM, Saint Michael  wrote:
> 
> How do you rsync over SSH when all you have is a Plain Old FTP server to 
> connect to?
> Maybe there is something I need to learn.
> 
> On Sat, Nov 18, 2017 at 5:08 PM, Andrey Repin  wrote:
> Greetings, Saint Michael!
> 
> > I need to do an rsync of hundreds of files very morning. The least complex
> > way to achieve that is to do an rsync with some parameters that narrow down 
> > what files I need.
> > Is there a better way?
> 
> rsync over a network mount is the WORST POSSIBLE SOLUTION EVER.
> Use normal rsync over SSH, it will be much faster, even if you do checksum
> syncs.
> 
> 
> --
> With best regards,
> Andrey Repin
> Sunday, November 19, 2017 01:07:34
> 
> Sorry for my terrible english...
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC 2.16 - vxlan. migrated from to

2017-09-26 Thread Ron Kelley
Looks like I mistakenly said unicast.  In fact, we are using multicast group IP 
(239.0.0.1) to setup our VXLAN interfaces (please see below).  Also, in the 
example below, 10.250.1.21 is bound to eth0 (our management interface) and 
172.18.22.21 is bound to eth1 (VXLAN interface).  We have 8 servers running 
thus far, and just increment the last octet to identify the server (i.e.: 
.21=server-01, .22=server-02, etc).

Some more detailed information:

# --
# The following script is run at boot time to setup VxLAN interfaces
# /usr/local/bin/setup_vxlans.sh <up|down>
# --
#!/bin/bash

# Script to configure VxLAN networks
ACTION="$1"

[ $ACTION == "up"   ]   && ip -4 route add 239.0.0.1 dev eth1
[ $ACTION == "down"   ] && ip -4 route del 239.0.0.1 dev eth1

for i in {1101..1130}
do
   [ $ACTION == "up"   ]  && ip link add vxlan.${i} type vxlan group 239.0.0.1 
dev eth1 dstport 0 id ${i} && ip link set vxlan.${i} up
   [ $ACTION == "down" ]  && ip link set vxlan.${i} down && ip link del 
vxlan.${i}
done
# --
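
For completeness, one way to run the script at boot (assuming systemd; the unit name is arbitrary):

# /etc/systemd/system/vxlan-interfaces.service
[Unit]
Description=Provision VXLAN interfaces
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/setup_vxlans.sh up
ExecStop=/usr/local/bin/setup_vxlans.sh down

[Install]
WantedBy=multi-user.target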



# --
# The /etc/network/interfaces file:
# --
source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
address 10.250.1.21
netmask 255.255.255.0
broadcast 10.251.1.255
gateway 10.251.1.1
dns-nameservers 8.8.8.8 4.2.2.2
auto eth1
iface eth1 inet static
  address 172.17.22.21
  netmask 255.255.255.0
# --


# --
# Output from “lxc profile show ” snippet
# --
...
  limits.cpu: "2"
  limits.memory: 2000MB
  limits.memory.swap: "true"
...
devices:
  eth0:
name: eth0
nictype: macvlan
parent: vxlan.1105
type: nic
...
# --


Anything seem wrong or out of place?

Thanks,

-Ron




> On Sep 26, 2017, at 10:59 AM, Stéphane Graber <stgra...@ubuntu.com> wrote:
> 
> Hi,
> 
> Ok, so you're doing your own VXLAN outside of LXD and then connecting
> containers directly to it.
> 
> The kernel error is very odd for unicast vxlan... there's really no
> reason for it to ever use the other interface...
> 
> So I'm assuming the 10.250.1.21 IP is on eth0 and 172.18.22.21 on eth1
> (or the reverse)? that is, you don't have both subnets on eth1.
> 
> On Tue, Sep 26, 2017 at 09:52:37AM -0400, Ron Kelley wrote:
>> Thanks Stephane.
>> 
>> Here is a “lxc network list” on the hosts:
>> 
>> rkelley@LXD-QA-Server-04:~$ lxc network list
>> +--------+----------+---------+-------------+---------+
>> |  NAME  |   TYPE   | MANAGED | DESCRIPTION | USED BY |
>> +--------+----------+---------+-------------+---------+
>> | eth0   | physical | NO      |             | 0       |
>> +--------+----------+---------+-------------+---------+
>> | eth1   | physical | NO      |             | 2       |
>> +--------+----------+---------+-------------+---------+
>> | virbr0 | bridge   | NO      |             | 0       |
>> +--------+----------+---------+-------------+---------+
>> 
>> 
>> Also, we are using vxlan in unicast mode via eth1.  Each LXD server has a 
>> unicast IP address on eth1 that lives in a separate VLAN from eth0 on the 
>> directly connected network switch.  If both eth0 and eth1 were in the same 
>> VLAN, I could possibly see an issue.  Once a container is spun up, it is 
>> attached to a VXLAN interface off eth1 (i.e.: vxlan.1115)
>> 
>> Thus, I am scratching my head..
>> 
>> -Ron
>> 
>> 
>>> On Sep 26, 2017, at 9:02 AM, Stéphane Graber <stgra...@ubuntu.com> wrote:
>>> 
>>> On Sun, Sep 24, 2017 at 03:27:27PM -0400, Ron Kelley wrote:
>>>> Greetings all,
>>>> 
>>>> Trying to isolate a condition whereby a container providing firewall 
>>>> services seems to stop processing traffic for a short time.  We keep 
>>>> getting the below information in /var/log/syslog of the server running the 
>>>> firewall.  The IP addresses shown match the network interfaces of the 
>>>> remote LXD server running the web server.  These IPs are for the server 
>>>> itself, and not the container IP
>>>> 
>>>> Sep 24 15:10:25 LXD-Server-04 kernel: [144272.412154] vxlan.1104: 
>>>> 00:11:22:aa:66:

Re: [lxc-users] LXC 2.16 - vxlan. migrated from to

2017-09-26 Thread Ron Kelley
Thanks Stephane.

Here is a “lxc network list” on the hosts:

rkelley@LXD-QA-Server-04:~$ lxc network list
+--------+----------+---------+-------------+---------+
|  NAME  |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+--------+----------+---------+-------------+---------+
| eth0   | physical | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
| eth1   | physical | NO      |             | 2       |
+--------+----------+---------+-------------+---------+
| virbr0 | bridge   | NO      |             | 0       |
+--------+----------+---------+-------------+---------+


Also, we are using vxlan in unicast mode via eth1.  Each LXD server has a 
unicast IP address on eth1 that lives in a separate VLAN from eth0 on the 
directly connected network switch.  If both eth0 and eth1 were in the same 
VLAN, I could possibly see an issue.  Once a container is spun up, it is attached 
to a VXLAN interface off eth1 (i.e.: vxlan.1115)

Thus, I am scratching my head..

-Ron


> On Sep 26, 2017, at 9:02 AM, Stéphane Graber <stgra...@ubuntu.com> wrote:
> 
> On Sun, Sep 24, 2017 at 03:27:27PM -0400, Ron Kelley wrote:
>> Greetings all,
>> 
>> Trying to isolate a condition whereby a container providing firewall 
>> services seems to stop processing traffic for a short time.  We keep getting 
>> the below information in /var/log/syslog of the server running the firewall. 
>>  The IP addresses shown match the network interfaces of the remote LXD 
>> server running the web server.  These IPs are for the server itself, and not 
>> the container IP
>> 
>> Sep 24 15:10:25 LXD-Server-04 kernel: [144272.412154] vxlan.1104: 
>> 00:11:22:aa:66:a3 migrated from 10.250.1.21  to 172.18.22.21
>> Sep 24 15:10:26 LXD-Server-04 kernel: [144272.412154] vxlan.1104: 
>> 00:11:22:aa:66:a3 migrated from 172.18.22.21 to 10.250.1.21 
>> Sep 24 15:10:27 LXD-Server-04 kernel: [144272.412154] vxlan.1104: 
>> 00:11:22:aa:66:a3 migrated from 10.250.1.21  to 172.18.22.21
>> Sep 24 15:10:28 LXD-Server-04 kernel: [144272.412154] vxlan.1104: 
>> 00:11:22:aa:66:a3 migrated from 172.18.22.21 to 10.250.1.21 
>> Sep 24 15:10:29 LXD-Server-04 kernel: [144272.412154] vxlan.1104: 
>> 00:11:22:aa:66:a3 migrated from 10.250.1.21  to 172.18.22.21
>> Sep 24 15:10:30 LXD-Server-04 kernel: [144272.412154] vxlan.1104: 
>> 00:11:22:aa:66:a3 migrated from 172.18.22.21 to 10.250.1.21 
>> Sep 24 15:10:31 LXD-Server-04 kernel: [144272.412154] vxlan.1104: 
>> 00:11:22:aa:66:a3 migrated from 10.250.1.21  to 172.18.22.21
>> Sep 24 15:10:32 LXD-Server-04 kernel: [144272.412154] vxlan.1104: 
>> 00:11:22:aa:66:a3 migrated from 172.18.22.21 to 10.250.1.21 
>> 
>> Notice how they migrate from one interface to another and then back again.  
>> Any idea as to why these messages are getting logged?
>> 
>> Thanks.
>> 
>> -Ron
> 
> Hmm, so I think I'm going to need a bit more details on the setup.
> Can you show the "lxc network show" for the network on both hosts?
> 
> My current guess is that you're using vxlan in multicast mode and both
> your hosts have two IPs on two subnets. Multicast VXLAN works on both
> those subnets and it can therefore see the same remote MAC on both,
> having it flip/flop between the two paths.
> 
> -- 
> Stéphane Graber
> Ubuntu developer
> http://www.ubuntu.com
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXC 2.16 - vxlan. migrated from to

2017-09-24 Thread Ron Kelley
Greetings all,

Trying to isolate a condition whereby a container providing firewall services 
seems to stop processing traffic for a short time.  We keep getting the below 
information in /var/log/syslog of the server running the firewall.  The IP 
addresses shown match the network interfaces of the remote LXD server running 
the web server.  These IPs are for the server itself, and not the container IP

Sep 24 15:10:25 LXD-Server-04 kernel: [144272.412154] vxlan.1104: 
00:11:22:aa:66:a3 migrated from 10.250.1.21  to 172.18.22.21
Sep 24 15:10:26 LXD-Server-04 kernel: [144272.412154] vxlan.1104: 
00:11:22:aa:66:a3 migrated from 172.18.22.21 to 10.250.1.21 
Sep 24 15:10:27 LXD-Server-04 kernel: [144272.412154] vxlan.1104: 
00:11:22:aa:66:a3 migrated from 10.250.1.21  to 172.18.22.21
Sep 24 15:10:28 LXD-Server-04 kernel: [144272.412154] vxlan.1104: 
00:11:22:aa:66:a3 migrated from 172.18.22.21 to 10.250.1.21 
Sep 24 15:10:29 LXD-Server-04 kernel: [144272.412154] vxlan.1104: 
00:11:22:aa:66:a3 migrated from 10.250.1.21  to 172.18.22.21
Sep 24 15:10:30 LXD-Server-04 kernel: [144272.412154] vxlan.1104: 
00:11:22:aa:66:a3 migrated from 172.18.22.21 to 10.250.1.21 
Sep 24 15:10:31 LXD-Server-04 kernel: [144272.412154] vxlan.1104: 
00:11:22:aa:66:a3 migrated from 10.250.1.21  to 172.18.22.21
Sep 24 15:10:32 LXD-Server-04 kernel: [144272.412154] vxlan.1104: 
00:11:22:aa:66:a3 migrated from 172.18.22.21 to 10.250.1.21 

Notice how they migrate from one interface to another and then back again.  Any 
idea as to why these messages are getting logged?

Thanks.

-Ron
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Setting hostname different than container name

2017-08-19 Thread Ron Kelley

Awesome, thanks Stephane. We can use the raw.lxc setting as a good fix.




On August 19, 2017 7:43:34 PM Stéphane Graber <stgra...@ubuntu.com> wrote:


On Sat, Aug 19, 2017 at 04:48:51PM -0400, Ron Kelley wrote:

Greetings all,

Trying to set the container hostname different from the container name.  
Editing /etc/hostname and /etc/hosts seems to work on Ubuntu but not on 
CentOS 6 containers.  What is the proper way to set the hostname so it is 
persistent among reboots for CentOS containers?  The LXD config guide 
(https://github.com/lxc/lxd/blob/master/doc/configuration.md) has no 
mention of a hostname keyword.


I'm not very familiar with RHEL/Centos and indeed couldn't find an easy
way (short of using /etc/rc.local) to set the hostname in a Centos 6
container the way I'd expect it to work...

It looks like centos6 looks at the kernel utsname and if it's already
set, won't change it, no matter what you put in /etc/hostname or
/etc/sysconfig/network. Though maybe I'm missing something and one of
our Red Hat / CentOS users will know how to get around this.


At the LXD level, you can use raw.lxc to force the utsname to another
value and that part does seem to work:

lxc config set c1 raw.lxc lxc.utsname=some-name

--
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



--
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Setting hostname different than container name

2017-08-19 Thread Ron Kelley
Greetings all,

Trying to set the container hostname different from the container name.  
Editing /etc/hostname and /etc/hosts seems to work on Ubuntu but not on CentOS 
6 containers.  What is the proper way to set the hostname so it is persistent 
among reboots for CentOS containers?  The LXD config guide 
(https://github.com/lxc/lxd/blob/master/doc/configuration.md) has no mention of 
a hostname keyword.


Thanks.

-Ron

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] ?==?utf-8?q? OVS / GRE - guest-transparent mesh networking across multiple hosts

2017-08-03 Thread Ron Kelley
We have implemented something similar to this using VXLAN (outside the scope of 
LXC).

Our setup: 6x servers colocated in the data center running LXD 2.15 - each 
server with 2x NICs: nic(a) for management and nic(b) 

* nic(a) is strictly used for all server management tasks (lxd commands)
* nic(b) is used for all VXLAN network segments


On each server, we provision ethernet interface eth1 with a private IP Address 
(i.e.: 172.20.0.x/24) and run the following script at boot to provision the 
VXLAN interfaces (via multicast):
---
#!/bin/bash

# Script to configure VxLAN networks
ACTION="$1"

case "$ACTION" in
  up)
    ip -4 route add 239.0.0.1 dev eth1
    for i in {1101..1130}; do ip link add vxlan.${i} type vxlan group 239.0.0.1 dev eth1 dstport 0 id ${i} && ip link set vxlan.${i} up; done
    ;;
  down)
    ip -4 route del 239.0.0.1 dev eth1
    for i in {1101..1130}; do ip link set vxlan.${i} down && ip link del vxlan.${i}; done
    ;;
  *)
    echo " ${0} up|down"; exit
    ;;
esac
---

To get the containers talking, we simply assign a container to a respective 
VXLAN interface via the “lxc network attach” command like this:  
/usr/bin/lxc network attach vxlan.${VXLANID} ${HOSTNAME} eth0 eth0.
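
For a concrete instance (container name and VXLAN ID are just examples):

/usr/bin/lxc network attach vxlan.1105 web-01 eth0 eth0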

We have single-armed (i.e.: eth0) containers that live exclusively behind a 
VXLAN interface, and we have dual-armed servers (eth0 and eth1) that act as 
firewall/NAT containers for a VXLAN segment.

It took a while to get it all working, but it works great.  We can move 
containers anywhere in our infrastructure without issue. 

Hope this helps!



-Ron




> On Aug 3, 2017, at 8:05 AM, Tomasz Chmielewski  wrote:
> 
> I think fan is single server only and / or won't cross different networks.
> 
> You may also take a look at https://www.tinc-vpn.org/
> 
> Tomasz
> https://lxadm.com
> 
> On Thursday, August 03, 2017 20:51 JST, Félix Archambault 
>  wrote: 
> 
>> Hi Amblard,
>> 
>> I have never used it, but this may be worth taking a look to solve your
>> problem:
>> 
>> https://wiki.ubuntu.com/FanNetworking
>> 
>> On Aug 3, 2017 12:46 AM, "Amaury Amblard-Ladurantie" 
>> wrote:
>> 
>> Hello,
>> 
>> I am deploying 10< bare metal servers to serve as hosts for containers
>> managed through LXD.
>> As the number of container grows, management of inter-container
>> running on different hosts becomes difficult to manage and need to be
>> streamlined.
>> 
>> The goal is to setup a 192.168.0.0/24 network over which containers
>> could communicate regardless of their host. The solutions I looked at
>> [1] [2] [3] recommend use of OVS and/or GRE on hosts and the use of
>> bridge.driver: openvswitch configuration for LXD.
>> Note: baremetal servers are hosted on different physical networks and
>> use of multicast was ruled out.
>> 
>> An illustration of the goal architecture is similar to the image visible on
>> https://books.google.fr/books?id=vVMoDwAAQBAJ=PA168=
>> 6aJRw15HSf=PA197#v=onepage=false
>> Note: this extract is from a book about LXC, not LXD.
>> 
>> The point that is not clear is
>> - whether each container needs to have as many veth as there are
>> baremetal host, in which case [de]commission of a new baremetal would
>> require configuration updated of all existing containers (and
>> basically rule out this scenario)
>> - or whether it is possible to "hide" this mesh network at the host
>> level and have a single veth inside each container to access the
>> private network and communicate with all the other containers
>> regardless of their physical location and regardeless of the number of
>> physical peers
>> 
>> Has anyone built such a setup?
>> Does the OVS+GRE setup need to be build prior to LXD init or can LXD
>> automate part of the setup?
>> Online documentation is scarce on the topic so any help would be
>> appreciated.
>> 
>> Regards,
>> Amaury
>> 
>> [1] https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/
>> [2] https://stackoverflow.com/questions/39094971/want-to-use-the-vlan-feature-of-openvswitch-with-lxd-lxc
>> [3] https://bayton.org/docs/linux/lxd/lxd-zfs-and-bridged-networking-on-ubuntu-16-04-lts/
>> 
>> 
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Fastest way to copy containers

2017-07-25 Thread Ron Kelley
As far as I know, LXD does not support incremental copies - either local or 
remote (“error: Container ‘blahblah2' already exists”).

The question I posed earlier was how to copy a container from one LXD server to 
another.  We need to migrate containers to different hosts so we can do 
maintenance on the container servers, and we have lots of containers to move.  
If we are using BTRFS, “lxc copy” will use btrfs send/receive which is very 
slow (~20MB/sec over a 10G network).  

The quick work-around is to create a shell container on the remote server (lxc 
init server:blablah2) then use rsync to copy the data.  It works much faster 
than btrfs send/receive.
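
A rough sketch of the sequence (image alias and on-disk paths are assumptions; the rootfs location differs between the deb and snap packages):

# on the destination host: create a minimal placeholder container with the target name
lxc init images:alpine/3.5 blahblah2

# from the source host: overwrite the placeholder's rootfs with the real data
rsync -aHAX --numeric-ids /var/lib/lxd/containers/blahblah2/rootfs/ \
    root@destination:/var/lib/lxd/containers/blahblah2/rootfs/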


-Ron





 
On Jul 25, 2017, at 11:11 AM, Rory Campbell-Lange  
wrote:

On 25/07/17, Andrey Repin (anrdae...@yandex.ru) wrote:
>> I am trying to copy sites from one LXD to another - both running BTRFS.
>> The normal “lxc copy” command uses btrfs send/receive which is terribly
>> slow.
> 
> btrfs send/receive is fast, when you send incremental copies of partitions.
> And of course it will be slower than rsync, since it works on a different
> level.

Are you copying to a btrfs seed image? In other words, if you made the
container from a base image, and then have snapshotted that base image
again as a copy target, the send/receive process should be very fast.

On the other hand for an initial image, it would be better to do a btrfs
snapshot assuming that the images are on the same volume. That should be
basically instantaneous.

Rory
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Fastest way to copy containers

2017-07-25 Thread Ron Kelley
Sorry, typo on my part.  Definitely “lxc init” does not show this option.

Thanks again for the pointer!



On Jul 25, 2017, at 8:39 AM, Fajar A. Nugraha <l...@fajar.net> wrote:

On Tue, Jul 25, 2017 at 7:23 PM, Ron Kelley <rkelley...@gmail.com> wrote:
Thanks Fajar.

Interesting, I have not seen/used “lxd init” yet.  

It's 'lxc init'. 'lxd init' is something entirely different :)
 
The output of “lxc -h” does not show the init command.  Guess it must be a 
super-admin command since it is hidden :-)



I recall stephane(?) mention this command a long time ago. Worst-case scenario, 
'lxc launch' and 'lxc stop --force' should pretty much do the same thing, since 
the difference is whether the container is started or not :)

The key point here is 'use the smallest image available'. And then (when it's 
stopped) overwrite it (or in my case, replace it with send/receive from another 
dataset)

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Fastest way to copy containers

2017-07-25 Thread Ron Kelley
Thanks Fajar.

Interesting, I have not seen/used “lxd init” yet.  The output of “lxc -h” does 
not show the init command.  Guess it must be a super-admin command since it is 
hidden :-)

-Ron





 
On Jul 25, 2017, at 8:18 AM, Fajar A. Nugraha <l...@fajar.net> wrote:

On Tue, Jul 25, 2017 at 7:11 PM, Ron Kelley <rkelley...@gmail.com> wrote:
Greetings all,

I am trying to copy sites from one LXD to another - both running BTRFS.  The 
normal “lxc copy” command uses btrfs send/receive which is terribly slow.  
Since rsync works much, much faster, is there a quick way to create the 
container “shell” on the remote server (and register it with LXD) and then 
manually rsync the data over?


probably 'lxc init', choosing the smallest available container (e.g. 
images:alpine/3.5). 
 
As an aside; I think LXD should allow the user to specify which copy tool to 
leverage when doing the copying.  Is that possible?


The intention was probably 'to use storage-specific method, which should be 
much faster' (which is true with zfs).

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Fastest way to copy containers

2017-07-25 Thread Ron Kelley
Greetings all,

I am trying to copy sites from one LXD to another - both running BTRFS.  The 
normal “lxc copy” command uses btrfs send/receive which is terribly slow.  
Since rsync works much, much faster, is there a quick way to create the 
container “shell” on the remote server (and register it with LXD) and then 
manually rsync the data over?

As an aside; I think LXD should allow the user to specify which copy tool to 
leverage when doing the copying.  Is that possible?


Thanks.

-Ron
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD 2.14 - Ubuntu 16.04 - kernel 4.4.0-57-generic - SWAP continuing to grow

2017-07-15 Thread Ron Kelley
Thanks for the great replies.  

Marat/Fajar:  How many servers do you guys have running in production, and what 
are their characteristics (RAM, CPU, workloads, etc)?  I am trying to see if 
our systems generally align to what you are running.  Running without swap 
seems rather drastic and removes the “safety net” in the case of a bad program. 
 In the end, we must have all containers/processes running 24/7.

tldr;

After digging into this a bit, it seems “top”, “htop”, and “free” report similar 
swap usage; however, other tools report much less.  I found the 
following threads on the ‘net which include simple scripts to look in /proc and 
examine swap usage per process:

https://stackoverflow.com/questions/479953/how-to-find-out-which-processes-are-swapping-in-linux
https://www.cyberciti.biz/faq/linux-which-process-is-using-swap 

As some people pointed out, top/htop don’t accurately report the swap usage as 
they combine a number of memory fields together.  And, indeed, running the 
script in each container (looking at /proc) show markedly different results 
when all the numbers are added up.  For example, the “free” command on one of 
our servers reports 3G of swap in use, but the script that scans the /proc 
directory only shows 1.3G of real swap in use.  Very odd.
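
The gist of those scripts is just summing the VmSwap field from /proc; a minimal equivalent (not copied from either link):

for f in /proc/[0-9]*/status; do
    awk '/^Name:/{n=$2} /^VmSwap:/{print $2, n}' "$f"
done | sort -n | awk '{t+=$1; printf "%8d kB  %s\n", $1, $2} END {printf "%8d kB  TOTAL\n", t}'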

All that said, the real issue is to find out if one of our containers/processes 
has a memory leak (per Marat’s suggestion below).  Unfortunately, LXD does not 
provide an easy way to track per-container stats, thus we must “roll our own” 
tools.



-Ron




> On Jul 15, 2017, at 5:11 AM, Marat Khalili <m...@rqc.ru> wrote:
> 
> I'm using LXC, and I frequently observe some unused containers get swapped 
> out, even though system has plenty of RAM and no RAM limits are set. The only 
> bad effect I observe is couple of seconds delay when you first log into them 
> after some time. I guess it is absolutely normal since kernel tries to 
> maximize amount of memory available for disk caches.
> 
> If you don't like this behavior, instead of trying to fine tune kernel 
> parameters why not disable swap altogether? Many people run it this way, it's 
> mostly a matter of taste these days. (But first check your software for 
> leaks.)
> 
> > For example, our “server-4” machine shows 8G total RAM, 500MB free, 2.5G 
> > available, and 5G of buff/cache. Yet, swap is at 5.5GB and has been slowly 
> > growing over the past few days. It seems something is preventing the apps 
> > from using the RAM.
> 
> Did you identify what processes all this virtual memory belongs to?
> 
> > To be honest, we have been battling lots of memory/swap issues using LXD. 
> > We started with no tuning, but the app stack quickly ran out of memory. 
> 
> LXC/LXD is hardly responsible for your app stack memory usage. Either you 
> underestimated it or there's a memory leak somewhere.
> 
> > Given all the issues we have had with memory and swap using LXD, we are 
> > seriously considering moving back to the traditional VM approach until 
> > LXC/LXD is better “baked”.
> 
> Did your VMs use less memory? I don't think so. Limits could be better 
> enforced, but VMs don't magically give you infinite RAM. 
> -- 
> 
> With Best Regards,
> Marat Khalili
> 
> On July 14, 2017 9:58:57 PM GMT+03:00, Ron Kelley <rkelley...@gmail.com> 
> wrote:
> Wondering if anyone else has similar issues.
> 
> We have 5x LXD 2.12 servers running (U16.04 - kernel 4.4.0-57-generic - 8G 
> RAM, 19G SWAP).  Each server is running about 50 LXD containers - Wordpress 
> w/Nginx and PHP7.  The servers have been running for about 15 days now, and 
> swap space continues to grow.  In addition, the kswapd0 process starts 
> consuming CPU until we flush the system cache via "/bin/echo 3 > 
> /proc/sys/vm/drop_caches” command.
> 
> Our LXD profile looks like this:
> -
> config:
>   limits.cpu: "2"
>   limits.memory: 512MB
>   limits.memory.swap: "true"
>   limits.memory.swap.priority: "1"
> -
> 
> 
> We also have added these to /etc/sysctl.conf
> -
> vm.swappiness=10
> vm.vfs_cache_pressure=50
> -
> 
> A quick “top” output shows plenty of available Memory and buff/cache.  But, 
> for some reason, the system continues to swap out the app.  For example, our 
> “server-4” machine shows 8G total RAM, 500MB free, 2.5G available, and 5G of 
> buff/cache.  Yet, swap is at 5.5GB and has been slowly growing over the past 
> few days.  It seems something is preventing the apps from using the RAM.
> 
> 
> To be honest, we have been battling lots of memory/swap issues using LXD.  We 
> started with no tuning, but the app stack

[lxc-users] LXD 2.14 - Ubuntu 16.04 - kernel 4.4.0-57-generic - SWAP continuing to grow

2017-07-14 Thread Ron Kelley
Wondering if anyone else has similar issues.

We have 5x LXD 2.12 servers running (U16.04 - kernel 4.4.0-57-generic - 8G RAM, 
19G SWAP).  Each server is running about 50 LXD containers - Wordpress w/Nginx 
and PHP7.  The servers have been running for about 15 days now, and swap space 
continues to grow.  In addition, the kswapd0 process starts consuming CPU until 
we flush the system cache via "/bin/echo 3 > /proc/sys/vm/drop_caches” command.

Our LXD profile looks like this:
-
config:
  limits.cpu: "2"
  limits.memory: 512MB
  limits.memory.swap: "true"
  limits.memory.swap.priority: "1"
-


We also have added these to /etc/sysctl.conf
-
vm.swappiness=10
vm.vfs_cache_pressure=50
-

A quick “top” output shows plenty of available Memory and buff/cache.  But, for 
some reason, the system continues to swap out the app.  For example, our 
“server-4” machine shows 8G total RAM, 500MB free, 2.5G available, and 5G of 
buff/cache.  Yet, swap is at 5.5GB and has been slowly growing over the past 
few days.  It seems something is preventing the apps from using the RAM.


To be honest, we have been battling lots of memory/swap issues using LXD.  We 
started with no tuning, but the app stack quickly ran out of memory.  After 
editing the profile to allow 512MB RAM per container (and restarting the 
container), the kswapd0 issue happens.  Given all the issues we have had with 
memory and swap using LXD, we are seriously considering moving back to the 
traditional VM approach until LXC/LXD is better “baked”.


-Ron
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Profile settings to limit RAM and SWAP

2017-06-27 Thread Ron Kelley
Sorry - I have been trying a few things to get this working.  In my email 
below, I mistakenly said "512MB of RAM and 512MB of swap per container” but the 
example was using 512M RAM and 1G swap.

Still, the same issue exists.  How to properly limit the container to X-RAM and 
Y-SWAP (and have “free -m” and “top” show properly).




> On Jun 27, 2017, at 8:15 AM, Ron Kelley <rkelley...@gmail.com> wrote:
> 
> Greetings all,
> 
> Running LXD 2.14 on Ubuntu 16.04 and need to find a way to properly set 
> limits for RAM and SWAP for my containers.
> 
> The goal: profile 512MB of RAM and 512MB of swap per container (total 1G)
> 
> 
> My current profile:
> 
> config:
>   limits.cpu: "2"
>   limits.memory: 512MB
>   limits.memory.swap.priority: "1"
>  raw.lxc: lxc.cgroup.memory.memsw.limit_in_bytes = 1512M
> 
> 
> 
> 
> However, according to “top” and “free -m”, the available memory = (RAM - 
> ProcessesUsed) and does not include swap.  Example:
> 
> top - 12:08:40 up 1 min,  0 users,  load average: 0.59, 1.12, 2.23
> Tasks:  19 total,   2 running,  17 sleeping,   0 stopped,   0 zombie
> %Cpu(s): 11.8 us,  4.4 sy,  0.0 ni, 82.3 id,  0.2 wa,  0.0 hi,  1.4 si,  0.0 
> st
> KiB Mem :   524288 total,   263776 free,75492 used,   185020 buff/cache
> KiB Swap:  1024000 total,  1019936 free, 4064 used.   263776 avail Mem
> 
> 
> In the above output, I would expect to see about 1263776 avail Mem (Swap free 
> + RAM free).  
> 
> What am I missing?
> 
> -Ron

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Profile settings to limit RAM and SWAP

2017-06-27 Thread Ron Kelley
Greetings all,

Running LXD 2.14 on Ubuntu 16.04 and need to find a way to properly set limits 
for RAM and SWAP for my containers.

The goal: profile 512MB of RAM and 512MB of swap per container (total 1G)


My current profile:

config:
  limits.cpu: "2"
  limits.memory: 512MB
  limits.memory.swap.priority: "1"
  raw.lxc: lxc.cgroup.memory.memsw.limit_in_bytes = 1512M




However, according to “top” and “free -m”, the available memory = (RAM - 
ProcessesUsed) and does not include swap.  Example:

top - 12:08:40 up 1 min,  0 users,  load average: 0.59, 1.12, 2.23
Tasks:  19 total,   2 running,  17 sleeping,   0 stopped,   0 zombie
%Cpu(s): 11.8 us,  4.4 sy,  0.0 ni, 82.3 id,  0.2 wa,  0.0 hi,  1.4 si,  0.0 st
KiB Mem :   524288 total,   263776 free,75492 used,   185020 buff/cache
KiB Swap:  1024000 total,  1019936 free, 4064 used.   263776 avail Mem


In the above output, I would expect to see about 1263776 avail Mem (Swap free + 
RAM free).  

What am I missing?

-Ron
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD 2.13 - Containers using lots of swap despite having free RAM

2017-06-06 Thread Ron Kelley
I don’t have a way to track the memory usage of a container yet, but this issue 
seems very consistent among the containers.  In fact, all containers have a 
higher than expected swap usage.


As a quick test, I modified the container profile to see if removing the swap 
and memory limits would help (removed the 
“lxc.cgroup.memory.memsw.limit_in_bytes” setting and changed 
"limits.memory.swap=false” ).  The odd thing now is the available memory to the 
container seems to be maxed out at the limits.memory setting and does not 
include swap:

Top output from container
-
top - 09:48:50 up 9 min,  0 users,  load average: 0.62, 0.67, 0.72
Tasks:  20 total,   1 running,  19 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.7 us,  3.8 sy,  0.0 ni, 93.2 id,  0.0 wa,  0.0 hi,  0.3 si,  0.0 st
KiB Mem :   332800 total,52252 free,99324 used,   181224 buff/cache
KiB Swap: 19737596 total, 19737596 free,0 used.52252 avail Mem
-

Notice the "52252 avail Mem”.  Seems the container is maxed out at 325MB 
despite having access to 19G swap.

Confusing…





On Jun 6, 2017, at 5:40 AM, Fajar A. Nugraha <l...@fajar.net> wrote:

On Tue, Jun 6, 2017 at 4:29 PM, Ron Kelley <rkelley...@gmail.com> wrote:
(Similar to a reddit post: https://www.reddit.com/r/LXD/comments/53l7on/how_does_lxd_manage_swap_space).

Ubuntu 16.04, LXC 2.13 running about 50 containers.  System has 8G RAM and 20G 
swap.  From what I can tell, the containers are using lots of swap despite 
having free memory.


Top output from the host:
---
top - 05:23:24 up 15 days,  4:25,  2 users,  load average: 0.29, 0.45, 0.62
Tasks: 971 total,   1 running, 970 sleeping,   0 stopped,   0 zombie
%Cpu(s):  8.9 us,  8.1 sy,  0.0 ni, 81.0 id,  0.7 wa,  0.0 hi,  1.2 si,  0.0 st
KiB Mem :  8175076 total,   284892 free,  2199792 used,  5690392 buff/cache
KiB Swap: 19737596 total, 15739612 free,  3997984 used.  3599856 avail Mem
---


Top output from a container:
---
top - 09:19:47 up 10 min,  0 users,  load average: 0.52, 0.61, 0.70
Tasks:  17 total,   1 running,  16 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.3 us,  2.2 sy,  0.0 ni, 96.5 id,  0.0 wa,  0.0 hi,  1.0 si,  0.0 st
KiB Mem :   332800 total,   148212 free,79524 used,   105064 buff/cache
KiB Swap:   998400 total,   867472 free,   130928 used.   148212 avail Mem
---



Do you have history of memory usage inside the container? It's actually normal 
for linux to keep some elements in cache (e.g. inode entries), while forcing 
out program memory to swap. I'm guessing that's what happened during "busy" 
times, but now you see the non-busy times. Linux won't automatically put 
entries in swap back to memory if it's not used.

In the past I had to set vm.vfs_cache_pressure = 1000 to make linux release 
inode cache from memory. This was especially important on servers with lots of 
files. Nowadays I simply don't use swap anymore.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] ?==?utf-8?q? LXD and Kernel Samepage Merging (KSM)

2017-06-06 Thread Ron Kelley
What kind of memory savings did you get using UKSM?  Did you run it on an LXD 
server?

I finally got KSM working as expected on my containers using the ksm_preload 
technique (thanks Fajar!).  Unfortunately, the RAM savings is not as high as I 
expected.  I have about 50 containers - each running nginx and php7 (identical).

I modified the startup scripts for nginx and php7 to use the “ksm-wrapper” tool 
(which uses the ksm_preload technique) and restarted all the containers. After 
everything settled down, the best I could get was about 150MB of saved memory.  
 Much less than I expected.
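
To sanity-check whether anything is actually being merged after restarting the services, I just watch the host-side counters, e.g.:

watch -n5 'grep -H "" /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing'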

I looked into UKSM, but that requires a custom kernel (either build your own or 
use one of the pre-built ones: pf-kernel).  Since these are production servers, 
I am very hesitant to use anything but a standard kernel.

-Ron





On Jun 6, 2017, at 5:13 AM, Andreas Freudenberg <lxc-us...@licomonch.net> wrote:

Hi,

as stated by Tomasz, KSM will only work for applications which support it.
If you want a KSM for all apllications you could try UKSM [1].

Worked pretty well here ..


AF



[1] http://kerneldedup.org/en/projects/uksm/introduction/

Am 05.06.2017 um 03:18 schrieb Tomasz Chmielewski:
> KSM only works with applications which support it:
> 
> KSM only operates on those areas of address space which an application 
> has advised to be likely candidates
> for merging, by using the madvise(2) system call: int madvise(addr, 
> length, MADV_MERGEABLE).
> 
> This means that doing:
> 
>echo 1 > /sys/kernel/mm/ksm/run
> 
> will be enough for KVM, but will not do anything for applications like bash, 
> nginx, apache, php-fpm and so on.
> 
> 
> Please refer to "Enabling memory deduplication libraries in containers" on 
> https://openvz.org/KSM_(kernel_same-page_merging) - you will have to use 
> ksm_preload mentioned. I haven't personally used it with LXD.
> 
> 
> Tomasz Chmielewski
> https://lxadm.com
> 
> 
> On Monday, June 05, 2017 09:48 JST, Ron Kelley <rkelley...@gmail.com> wrote: 
> 
>> Thanks Fajar. 
>> 
>> This is on-site with our own physical servers, storage, etc.  The goal is to 
>> get the most containers per server as possible.  While our servers have lots 
>> of RAM, we need to come up with a long-term scaling plan and hope KSM can 
>> help us scale beyond the standard numbers.
>> 
>> As for the openvz link; I read that a few times but I don’t get any positive 
>> results using those methods.  This leads me to believe (a) LXD does not 
>> support KSM or (b) the applications are not registering w/the KSM part of 
>> the kernel.
>> 
>> I am going to run through some tests this week to see if I can get KSM 
>> working outside the LXD environment then try to replicate the same tests 
>> inside LXD.
>> 
>> Thanks again for the feedback.
>> 
>> 
>> 
>>> On Jun 4, 2017, at 6:15 PM, Fajar A. Nugraha <l...@fajar.net> wrote:
>>> 
>>> On Sun, Jun 4, 2017 at 11:16 PM, Ron Kelley <rkelley...@gmail.com> wrote:
>>> (Reviving the thread about Container Scaling:  
>>> https://lists.linuxcontainers.org/pipermail/lxc-users/2016-May/011607.html)
>>> 
>>> We have hit critical mass with LXD 2.12 and I need to get Kernel Samepage 
>>> Merging (KSM) working as soon as possible.  All my research has come to a 
>>> dead-end, and I am reaching out to the group at large for suggestions.
>>> 
>>> Background: We have 5 host servers - each running U16.04 
>>> (4.4.0-57-generic), 8G RAM, 20G SWAP, and 50 containers (exact configs per 
>>> server - nginx and php 7).
>>> 
>>> 
>>> Is this a cloud, or on-site setup?
>>> 
>>> For cloud, there are a lot of options that could get you running with MUCH 
>>> more memory, which would save you lots of headaches getting KSM to work. My 
>>> favorite is EC2 spot instance on AWS.
>>> 
>>> On another note, I now setup most of my hosts with no swap, since 
>>> performance plummets whenever swap is used. YMMV.
>>> 
>>> 
>>> I am trying to get KSM working since each container is an identical replica 
>>> of the other (other than hostname/IP).  I have read a ton of information on 
>>> the ‘net about Ubuntu and KSM, yet I can’t seem to get any pages to share 
>>> on the host.  I am not sure if this is a KSM config issue or if LXD won’t 
>>> allow KSM between containers.
>>> 
>>> 
>>> 
>>> 
>>> 
>>> Here is what I have done thus far:
>>> --
>>> * Installed the ksmtuned utility and verifie

[lxc-users] LXD 2.13 - Containers using lots of swap despite having free RAM

2017-06-06 Thread Ron Kelley
(Similar to a reddit post: 
https://www.reddit.com/r/LXD/comments/53l7on/how_does_lxd_manage_swap_space).

Ubuntu 16.04, LXC 2.13 running about 50 containers.  System has 8G RAM and 20G 
swap.  From what I can tell, the containers are using lots of swap despite 
having free memory.  


Top output from the host:
---
top - 05:23:24 up 15 days,  4:25,  2 users,  load average: 0.29, 0.45, 0.62
Tasks: 971 total,   1 running, 970 sleeping,   0 stopped,   0 zombie
%Cpu(s):  8.9 us,  8.1 sy,  0.0 ni, 81.0 id,  0.7 wa,  0.0 hi,  1.2 si,  0.0 st
KiB Mem :  8175076 total,   284892 free,  2199792 used,  5690392 buff/cache
KiB Swap: 19737596 total, 15739612 free,  3997984 used.  3599856 avail Mem
---


Top output from a container:
---
top - 09:19:47 up 10 min,  0 users,  load average: 0.52, 0.61, 0.70
Tasks:  17 total,   1 running,  16 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.3 us,  2.2 sy,  0.0 ni, 96.5 id,  0.0 wa,  0.0 hi,  1.0 si,  0.0 st
KiB Mem :   332800 total,   148212 free,79524 used,   105064 buff/cache
KiB Swap:   998400 total,   867472 free,   130928 used.   148212 avail Mem
---


The profile associated with the container:
---
root@Container-001:/usr/local/tmp# lxc profile show WP_Default
config:
  limits.cpu: "2"
  limits.memory: 325MB
  limits.memory.swap: "true"
  raw.lxc: lxc.cgroup.memory.memsw.limit_in_bytes = 1300M
description: ""
devices:
  eth0:
name: eth0
nictype: macvlan
parent: eth1.2005
type: nic
  root:
path: /
pool: default
type: disk
name: WP_Default
---


As it stands now, the host is using 4G of swap and the "kswapd0” program is 
using lots of CPU.  In fact, I have a cron job to clear the VM cache every 
5mins (/bin/echo 1 > /proc/sys/vm/drop_caches).
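
For reference, the cron entry is simply along these lines (the file path is our own choice):

# /etc/cron.d/lxd-drop-caches
*/5 * * * * root /bin/echo 1 > /proc/sys/vm/drop_caches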

Any pointers?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD and Kernel Samepage Merging (KSM)

2017-06-04 Thread Ron Kelley
Thanks for the feedback and info.  Seems something is amiss with my setup.

What does your Ubuntu install look like?  I have a standard Ubuntu 16.04 with 
up-to-date patches and with LXD 2.12.  Guess I need to find the differences 
between your setup and mine.

Thanks for any help!

-Ron



> On Jun 4, 2017, at 9:58 PM, Fajar A. Nugraha <l...@fajar.net> wrote:
> 
> On Mon, Jun 5, 2017 at 7:48 AM, Ron Kelley <rkelley...@gmail.com> wrote:
> 
> As for the openvz link; I read that a few times but I don’t get any positive 
> results using those methods.  This leads me to believe (a) LXD does not 
> support KSM or (b) the applications are not registering w/the KSM part of the 
> kernel.
> 
> 
> 
> Works for me here on a simple test, in 2 privileged containers.
> 
> # grep -H '' /sys/kernel/mm/ksm/*
> /sys/kernel/mm/ksm/full_scans:4488
> /sys/kernel/mm/ksm/merge_across_nodes:1
> /sys/kernel/mm/ksm/pages_shared:3
> /sys/kernel/mm/ksm/pages_sharing:3
> /sys/kernel/mm/ksm/pages_to_scan:100
> /sys/kernel/mm/ksm/pages_unshared:18
> /sys/kernel/mm/ksm/pages_volatile:0
> /sys/kernel/mm/ksm/run:1
> /sys/kernel/mm/ksm/sleep_millisecs:20
> /sys/kernel/mm/ksm/use_zero_pages:0
> 
> - build ksm-wrapper from source
> - patch ksm-wrapper as mentioned on 
> https://github.com/unbrice/ksm_preload/issues/6
> - on host, run 'echo 1 > /sys/kernel/mm/ksm/run'
> - on container 1 & 2 , run 'ksm-wrapper bash', and the (on both bash prompt) 
> run htop
> 
> pages_shared and pages_sharing will have non-zero value when htop is running 
> on both containers
> 
> -- 
> Fajar
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] ?==?utf-8?q? LXD and Kernel Samepage Merging (KSM)

2017-06-04 Thread Ron Kelley
Thanks.  I tried the ksm_preload option but it does not seem to work.  As I 
mentioned below, I had to create the ksm_preload and ksm-wrapper tools manually 
since I could not find them in the “normal” Ubuntu repositories.

Does anyone know if the ksm_preload libraries are native to Ubuntu (CentOS 
calls it the "ksm_preload-0.10-3.el6.x86_64.rpm” package)?







> On Jun 4, 2017, at 9:18 PM, Tomasz Chmielewski <man...@wpkg.org> wrote:
> 
> KSM only works with applications which support it:
> 
> KSM only operates on those areas of address space which an application 
> has advised to be likely candidates
> for merging, by using the madvise(2) system call: int madvise(addr, 
> length, MADV_MERGEABLE).
> 
> This means that doing:
> 
>echo 1 > /sys/kernel/mm/ksm/run
> 
> will be enough for KVM, but will not do anything for applications like bash, 
> nginx, apache, php-fpm and so on.
> 
> 
> Please refer to "Enabling memory deduplication libraries in containers" on 
> https://openvz.org/KSM_(kernel_same-page_merging) - you will have to use 
> ksm_preload mentioned. I haven't personally used it with LXD.
> 
> 
> Tomasz Chmielewski
> https://lxadm.com
> 
> 
> On Monday, June 05, 2017 09:48 JST, Ron Kelley <rkelley...@gmail.com> wrote: 
> 
>> Thanks Fajar. 
>> 
>> This is on-site with our own physical servers, storage, etc.  The goal is to 
>> get the most containers per server as possible.  While our servers have lots 
>> of RAM, we need to come up with a long-term scaling plan and hope KSM can 
>> help us scale beyond the standard numbers.
>> 
>> As for the openvz link; I read that a few times but I don’t get any positive 
>> results using those methods.  This leads me to believe (a) LXD does not 
>> support KSM or (b) the applications are not registering w/the KSM part of 
>> the kernel.
>> 
>> I am going to run through some tests this week to see if I can get KSM 
>> working outside the LXD environment then try to replicate the same tests 
>> inside LXD.
>> 
>> Thanks again for the feedback.
>> 
>> 
>> 
>>> On Jun 4, 2017, at 6:15 PM, Fajar A. Nugraha <l...@fajar.net> wrote:
>>> 
>>> On Sun, Jun 4, 2017 at 11:16 PM, Ron Kelley <rkelley...@gmail.com> wrote:
>>> (Reviving the thread about Container Scaling:  
>>> https://lists.linuxcontainers.org/pipermail/lxc-users/2016-May/011607.html)
>>> 
>>> We have hit critical mass with LXD 2.12 and I need to get Kernel Samepage 
>>> Merging (KSM) working as soon as possible.  All my research has come to a 
>>> dead-end, and I am reaching out to the group at large for suggestions.
>>> 
>>> Background: We have 5 host servers - each running U16.04 
>>> (4.4.0-57-generic), 8G RAM, 20G SWAP, and 50 containers (exact configs per 
>>> server - nginx and php 7).
>>> 
>>> 
>>> Is this a cloud, or on-site setup?
>>> 
>>> For cloud, there are a lot of options that could get you running with MUCH 
>>> more memory, which would save you lots of headaches getting KSM to work. My 
>>> favorite is EC2 spot instance on AWS.
>>> 
>>> On another note, I now setup most of my hosts with no swap, since 
>>> performance plummets whenever swap is used. YMMV.
>>> 
>>> 
>>> I am trying to get KSM working since each container is an identical replica 
>>> of the other (other than hostname/IP).  I have read a ton of information on 
>>> the ‘net about Ubuntu and KSM, yet I can’t seem to get any pages to share 
>>> on the host.  I am not sure if this is a KSM config issue or if LXD won’t 
>>> allow KSM between containers.
>>> 
>>> 
>>> 
>>> 
>>> 
>>> Here is what I have done thus far:
>>> --
>>> * Installed the ksmtuned utility and verified ksmd is running on each host.
>>> * Created the ksm_preload and ksm-wrapper tools per this site (the 
>>> https://github.com/unbrice/ksm_preload).
>>> * Created 50 identical Ubuntu 16.04 containers running nginx
>>> * Modified the nginx startup script on each container to include the 
>>> ksm_preload.so library; no issues running nginx.
>>> 
>>> (Note: since I could not find the ksm_preload library for Ubuntu, I had to 
>>> use the ksm-wrapper tool listed above)
>>> 
>>> All the relevant files under /sys/kernel/mm/ksm still show 0 (pages_shared, 
>>> pages_sharing, etc) regardless of what I do.
>>> 
>>

Re: [lxc-users] LXD and Kernel Samepage Merging (KSM)

2017-06-04 Thread Ron Kelley
Thanks Fajar. 

This is on-site with our own physical servers, storage, etc.  The goal is to 
get as many containers per server as possible.  While our servers have lots of 
RAM, we need to come up with a long-term scaling plan and hope KSM can help us 
scale beyond the standard numbers.

As for the openvz link; I read that a few times but I don’t get any positive 
results using those methods.  This leads me to believe (a) LXD does not support 
KSM or (b) the applications are not registering w/the KSM part of the kernel.

I am going to run through some tests this week to see if I can get KSM working 
outside the LXD environment then try to replicate the same tests inside LXD.
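
If it helps anyone following along, the outside-LXD test I have in mind is 
roughly this (a sketch only; the wrapper path assumes the ksm_preload / 
ksm-wrapper build from the github repo mentioned earlier in the thread):

---
# on the host: make sure ksmd is running
echo 1 | sudo tee /sys/kernel/mm/ksm/run

# in two separate shells: start an identical workload under the wrapper
./ksm-wrapper bash
htop

# back on the host: non-zero values here mean pages are actually being merged
grep -H '' /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing
---

If that works on the bare host but not inside the containers, at least I will 
know where to focus.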

Thanks again for the feedback.



> On Jun 4, 2017, at 6:15 PM, Fajar A. Nugraha <l...@fajar.net> wrote:
> 
> On Sun, Jun 4, 2017 at 11:16 PM, Ron Kelley <rkelley...@gmail.com> wrote:
> (Reviving the thread about Container Scaling:  
> https://lists.linuxcontainers.org/pipermail/lxc-users/2016-May/011607.html)
> 
> We have hit critical mass with LXD 2.12 and I need to get Kernel Samepage 
> Merging (KSM) working as soon as possible.  All my research has come to a 
> dead-end, and I am reaching out to the group at large for suggestions.
> 
> Background: We have 5 host servers - each running U16.04 (4.4.0-57-generic), 
> 8G RAM, 20G SWAP, and 50 containers (exact configs per server - nginx and php 
> 7).
> 
> 
> Is this a cloud, or on-site setup?
> 
> For cloud, there are a lot of options that could get you running with MUCH 
> more memory, which would save you lots of headaches getting KSM to work. My 
> favorite is EC2 spot instance on AWS.
> 
> On another note, I now setup most of my hosts with no swap, since performance 
> plummets whenever swap is used. YMMV.
> 
>  
> I am trying to get KSM working since each container is an identical replica 
> of the other (other than hostname/IP).  I have read a ton of information on 
> the ‘net about Ubuntu and KSM, yet I can’t seem to get any pages to share on 
> the host.  I am not sure if this is a KSM config issue or if LXD won’t allow 
> KSM between containers.
> 
> 
> 
>  
> 
> Here is what I have done thus far:
> --
> * Installed the ksmtuned utility and verified ksmd is running on each host.
> * Created the ksm_preload and ksm-wrapper tools per this site (the 
> https://github.com/unbrice/ksm_preload).
> * Created 50 identical Ubuntu 16.04 containers running nginx
> * Modified the nginx startup script on each container to include the 
> ksm_preload.so library; no issues running nginx.
> 
> (Note: since I could not find the ksm_preload library for Ubuntu, I had to 
> use the ksm-wrapper tool listed above)
> 
> All the relevant files under /sys/kernel/mm/ksm still show 0 (pages_shared, 
> pages_sharing, etc) regardless of what I do.
> 
> 
> Can any (@stgraber @brauner) confirm if KSM is supported with LXD?  If so, 
> what is the “magic” to make it work?  We really want to get 2-3x more sites 
> per container if possible.
> 
> 
> Have you read https://openvz.org/KSM_(kernel_same-page_merging) ? Some info 
> might be relevant. For example, it mentions something which you did not wrote:
> 
> To start ksmd, issue
> [root@HN ~]# echo 1 > /sys/kernel/mm/ksm/run
> 
> Also the section about Tuning and Caveats.
> 
> -- 
> Fajar
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXD and Kernel Samepage Merging (KSM)

2017-06-04 Thread Ron Kelley
(Reviving the thread about Container Scaling:  
https://lists.linuxcontainers.org/pipermail/lxc-users/2016-May/011607.html)

We have hit critical mass with LXD 2.12 and I need to get Kernel Samepage 
Merging (KSM) working as soon as possible.  All my research has come to a 
dead-end, and I am reaching out to the group at large for suggestions.

Background: We have 5 host servers - each running U16.04 (4.4.0-57-generic), 8G 
RAM, 20G SWAP, and 50 containers (exact configs per server - nginx and php 7).

I am trying to get KSM working since each container is an identical replica of 
the other (other than hostname/IP).  I have read a ton of information on the 
‘net about Ubuntu and KSM, yet I can’t seem to get any pages to share on the 
host.  I am not sure if this is a KSM config issue or if LXD won’t allow KSM 
between containers.


Here is what I have done thus far:
--
* Installed the ksmtuned utility and verified ksmd is running on each host.
* Created the ksm_preload and ksm-wrapper tools per this site (the 
https://github.com/unbrice/ksm_preload).
* Created 50 identical Ubuntu 16.04 containers running nginx
* Modified the nginx startup script on each container to include the 
ksm_preload.so library; no issues running nginx.

(Note: since I could not find the ksm_preload library for Ubuntu, I had to use 
the ksm-wrapper tool listed above)

All the relevant files under /sys/kernel/mm/ksm still show 0 (pages_shared, 
pages_sharing, etc) regardless of what I do.
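
For reference, this is how I am checking on each host (the counters never move 
off zero no matter how long I leave it running):

---
cat /sys/kernel/mm/ksm/run         # confirms ksmd is enabled (1)
grep -H '' /sys/kernel/mm/ksm/*    # pages_shared / pages_sharing stay at 0
---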


Can anyone (@stgraber @brauner) confirm whether KSM is supported with LXD?  If 
so, what is the “magic” to make it work?  We really want to get 2-3x more sites 
per container if possible.


Thanks.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXD - upgrade from 2.08 to 2.12 enables BTRFS quota

2017-06-04 Thread Ron Kelley
Greetings all.

I recently upgraded a number of LXD 2.08 servers to 2.12 and noticed the btrfs 
“quota” option was enabled where it was disabled before.  Enabling quotas on a 
filesystem with lots of snapshots can cause huge performance issues (as 
indicated by our 20min outage this morning when I tried to clear out some old 
snapshots).

Can one of the developers confirm if the upgrade enables quota?  If so, I will 
file a bug to ensure the user gets notified/alerted quotas will be enabled (so 
it can be disabled if necessary).
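
In the meantime, this is how I am checking for and turning the feature back off 
on each server (a sketch; adjust the path to wherever your LXD btrfs filesystem 
is mounted):

---
# lists qgroup usage if quotas are on (errors out if they are off)
sudo btrfs qgroup show /var/lib/lxd

# turn quotas back off
sudo btrfs quota disable /var/lib/lxd
---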

-Ron
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Broken on Gentoo linux-4.8.17-hardened-r2, LXD 2.11, and lxc 1.0.8

2017-06-01 Thread Ron Kelley
Mike,

I don’t know anything about Gentoo Linux, but can you consider using Ubuntu 16 
or 17 as your LXD host?  These OS installs definitely support LXD out of the 
box.

As for the network bridge question; You don’t have to connect any network 
interface to the container.  You can simply remove the network interface from 
the LXD profile (or via the “lxc network detach” command)
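
For example (the names below are placeholders), either of these should leave 
the container with no NIC at all:

---
# remove the NIC definition from the profile the container uses
lxc profile device remove default eth0

# or detach the bridge from just one container
lxc network detach lxdbr0 mycontainer eth0
---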



-Ron


> On May 18, 2017, at 6:29 PM, Michael Johnson  wrote:
> 
> Hi All.
> 
> I'm very new to lxd and having very little success.
> 
> What is the absolute bare minimum required to get a container up?
> 
> I've installed lxd.
> 
> I've started lxd.
> 
> When I run: lxd init, if I answer all the question with default, I get this:
> 
> error: Failed to run: iptables -w -t mangle -I POSTROUTING -o lxdbr0 -p
> udp --dport 68 -j CHECKSUM --checksum-fill -m comment --comment
> generated for LXD network lxdbr0: iptables: No chain/target/match by
> that name.
> 
> When I run: lxc launch images:centos/7/amd64 centos
> 
> I get this:
> 
> error: Failed to run: /usr/sbin/lxd forkstart centos
> /var/lib/lxd/containers /var/log/lxd/centos/lxc.conf
> 
> and the exact failure seems to be:
> 
> lxc_container 1495144728.829 ERRORlxc_start - start.c:lxc_spawn:975
> - failed to set up id mapping
> 
> What am I doing wrong? Or is this a bug? I've seen some bug report about
> failure to set up id mapping but that was in an older version and
> presumably was fixed.
> 
> Additionally, is it a rigid requirement to configure a network bridge or
> macvlan just to bring up a container?
> 
> Thanks for any direction or help!
> Regards,
> Mike
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Can I setup a private nat ipv4 and a public ipv6 address at same time for a lxc2 container?

2017-06-01 Thread Ron Kelley
How about adding two NICs to the container:  one for private networking (via 
lxdbridge) and one for public networking (via macvlan)?
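
Something along these lines in the container config should do it (a sketch 
only; "lxcbr0" and the host NIC name are assumptions, so adjust to your setup):

---
# NIC 1: veth on the NAT bridge, carries the private 10.1.0.10 address
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.name = eth0

# NIC 2: macvlan straight onto the host interface that carries the public /64
lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.link = eth0
lxc.network.name = eth1
---

Give eth1 the public IPv6 address (statically or via RA) and keep the IPv4 
default route pointing out eth0.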


> On May 31, 2017, at 10:31 PM, littlebat  wrote:
> 
> Hi, 
> Thanks for all of your help for building so cool thing - lxc.
> 
> I have studied my question several days and searched many online resource, 
> but didn't resolve this. The detail is too long, I describe a brief version 
> below:
> 
> I have a debian 9 host server installed lxc2 server, the host server has only 
> one pulic ipv4 address, suppose it is 8.8.8.8, and a public /64 subnet ipv6 
> pool, suppose it is 8:8:8:8::/64, and the eth0 of host ipv6 is: 
> 8:8:8:8::1/64. 
> 
> My goal is building the lxc unprivileged container, with a private nat ipv4 
> address, suppose it is 10.1.0.10, so I use ip forward to access container 
> from internet using public ipv4 plus port (suppose 8.8.8.8: forward 
> to/from 10.1.0.10:22). And, at same time, I want assign container a public 
> ipv6 address or ipv6 subnet( /112, can it be public accessed? ), so I can 
> access container from internet using public ipv6(suppose 8:8:8:8::10/64 port 
> 22 or 8:8:8:8::10/112 port 22 ? ). For simplifing question, suppose only 
> assign a public ipv6 (not a public ipv6 subnet) address to the container.
> 
> Util today, I can only setup both private nat ipv4(10.1.0.10) and private nat 
> ipv6(8:8:8:8::10/112) for the container, open ipv4 and ipv6 forward in 
> /etc/sysctl.conf, and using iptables and ip6tables to forward public traffic 
> to or from container(8.8.8.8:<->10.1.0.10:22,  8:8:8:8::1/64 port  
> <-> 8:8:8::10/112 port 22). This is done by create a "2. independent 
> bridge"(a different bridge out of thin air and link your containers together 
> on this bridge, but use forwarding to get it out on the internet or to get 
> traffic into it. debian wiki: https://wiki.debian.org/LXC/SimpleBridge). 
> reference: LXC host featuring IPv6 connectivity 
> https://blog.cepharum.de/en/post/lxc-host-featuring-ipv6-connectivity.html
> 
> And, I can create a "1. host-shared bridge"(a bridge out of your main network 
> interface which will hold both the host's IP and the container's IP 
> addresses. debian wiki: https://wiki.debian.org/LXC/SimpleBridge). Then, I 
> can assign a public ipv6 address to the container. But, I can't assign a 
> private nat ipv4 address to the container now. So, it is no way to public 
> access container using ipv4 address(because the sole public ipv4 address only 
> avalable on host network card).
> 
> My question is:
> 1, Can I setup a private nat ipv4 and a public ipv6 address at same time for 
> a lxc2 container?
> 
> 2, How to do it? 
> any idea or online resource link is welcome.
> 
> thanks.
> 
> -
> 
> Dashing Meng
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] near-live migration

2017-05-30 Thread Ron Kelley
While not a direct answer, I filed an enhancement request recently for this 
exact topic (incremental snapshots to a remote server).  The enhancement was 
approved, but I don't know when it will be included in the next LXD version.



On May 30, 2017 11:52:34 AM Kees Bos  wrote:


Hi,

Right now I'm using the sequence 'stop - move - start' for migration of
containers (live migration fails too often).

The 'move' step can take some time. I wonder if it would be easy to
implement/do something like:
  - prepare move (e.g. take snapshot and copy up to the snapshot)
  - stop
  - copy the rest
  - remove snapshot on dst
  - remove container from src
  - start container on dst

That would minimize downtime without the complexity of a live
migration.

What are your thoughts?

Kees
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Bind public IP that is available on host's ens3:1 to a specific LXD container?

2017-05-20 Thread Ron Kelley
Great suggestions from Fajar.  A couple more ideas if you only have one public 
IP on your container:

* Use HAProxy on the container’s main IP address with Server Name Indication 
(SNI) and a local DNS server.  This way, all your sites are tied to the same IP 
address as the container, with private addresses behind it (a rough config 
sketch follows below).

* Use nginx with local DNS lookups.  Similar to haproxy except nginx redirects 
the web requests to the appropriate backend.
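
A rough sketch of the HAProxy SNI pass-through idea (the hostnames and backend 
addresses below are made up; point them at your own containers):

---
frontend https-in
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend site1-https if { req_ssl_sni -i www.site1.example }
    default_backend site2-https

backend site1-https
    mode tcp
    server site1 10.10.10.11:443

backend site2-https
    mode tcp
    server site2 10.10.10.12:443
---

The nginx approach is the same idea, except nginx terminates (or streams) the 
connection and proxies to the backend container by name.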


-Ron

> On May 20, 2017, at 9:34 AM, Fajar A. Nugraha  wrote:
> 
> On Sat, May 20, 2017 at 10:31 AM, Thomas Ward  wrote:
> I've been able to switch this to a bridged method, with the
> host interfaces set to 'manual', an inet0 bridge created that is static
> IP'd for the host system to have its primary IP, and can have manual IP
> assignments to containers on that bridged network for the other
> non-primary IPs.
> 
> 
> For sake of completeness:
> - converting eth0 to be a slave is the "standard" approach:
> https://help.ubuntu.com/lts/serverguide/lxc.html#lxc-network
> https://help.ubuntu.com/lts/serverguide/network-configuration.html#bridging
> 
> - an easier approach is to use macvlan. Especially if the host doesn't need 
> to communicate directly with the container (which should also be what happens 
> in your case, as it appears the host on the containers are on different 
> subnet)
> https://github.com/lxc/lxd/blob/master/doc/containers.md#type-nic
> 
> - however both approach won't work if your provider limits only ONE mac 
> address on your port. In this case you'd need either proxy-arp (somewhat 
> complicated, but possible), or simply use iptables to forward all traffic for 
> the secondary IP to the container.
> 
> -- 
> Fajar
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD firewall container?

2017-05-05 Thread Ron Kelley
Fajar,

Just following up on this thread.  Thanks for pointing out the redundant NAT 
problem with ufw.  I found another solution to prevent this issue when 
restarting ufw (from here: https://gist.github.com/kimus/9315140 
<https://gist.github.com/kimus/9315140> in the comments section)

Adding a “-F” statement before your first NAT rule flushes the nat table - 
thereby preventing the redundant NAT entries.  Example:

---

# 
# Rules for Custom Network
# 
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]

# Flush table to prevent redundant NAT rules
-F

# Port Forwardings (change dport to match incoming port and destination:port to 
match target server behind eth1)
-A PREROUTING -d 192.168.24.5 -p tcp --dport 222 -j DNAT --to-destination 
30.1.1.3:22
-A PREROUTING -d 192.168.24.5 -p tcp --dport 801 -j DNAT --to-destination 
30.1.1.3:80
-A PREROUTING -d 192.168.24.5 -p tcp --dport 802 -j DNAT --to-destination 
30.1.1.3:443

# Use this if you have IP Aliases on the front end pointing to different 
back-end servers
-A PREROUTING -d 192.168.24.6 -p tcp --dport 222 -j DNAT --to-destination 
30.1.1.3:22

# NAT traffic from inside network (30.1.1.0/24) through eth0 to the world
-A POSTROUTING -s 30.1.1.0/24 -o eth0 -j MASQUERADE

COMMIT
...
...

...
...
---

I ran a test this morning with and without the “-F” option and verified 
everything worked as expected.

Just thought I would share with everyone.

Hope this helps.

-Ron





On Apr 27, 2017, at 8:25 PM, Fajar A. Nugraha <l...@fajar.net> wrote:

On Fri, Apr 28, 2017 at 1:05 AM, Ron Kelley <rkelley...@gmail.com> wrote:
Thanks for the feedback, Spike.  After looking around for a while, I, too, 
decided a small ubuntu container with a minimal firewall tool is the way to go. 
 In my case, I used “ufw” but will also look at "firehol”.

Our firewall/NAT requirements are not very large, and I finally figured out the 
right set of rules we need.  In essence, we just need to add these to the 
/etc/ufw/before.rules file and restart ufw:


with ONLY changes to /etc/ufw/before.rules, the NAT rules would be reapplied 
(resulting multiple rules on NAT table) whenever you restart ufw. No big deal 
if you plan to restart the container anyway on every rule change (or never plan 
to change the rules), but not ideal if your plan is to use "ufw reload".

In my case I had to separate ufw NAT rules into a new custom chain, 
ufw-before-prerouting: 


- edit /etc/ufw/before.init (copy it from /usr/share/ufw/before.init), and make 
it executable (e.g. chmod 700). Snippet of edited lines:

start)
iptables -t nat -N ufw-before-prerouting || true
iptables -t nat -I PREROUTING -j ufw-before-prerouting || true
;;
stop)
iptables -t nat -D PREROUTING -j ufw-before-prerouting || true
iptables -t nat -F ufw-before-prerouting || true
iptables -t nat -X ufw-before-prerouting || true
;;



- add NAT lines to /etc/ufw/before.rules to look similar to this:

# nat Table rules
*nat
:ufw-before-prerouting - [0:0]

# DNAT example
-A ufw-before-prerouting -i eth0 -p tcp --dport 21122 -j DNAT --to 
10.0.3.211:22


-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD firewall container?

2017-04-27 Thread Ron Kelley
Thanks Fajar.  Always appreciate seeing other people’s input on stuff like this.


> On Apr 27, 2017, at 8:25 PM, Fajar A. Nugraha <l...@fajar.net> wrote:
> 
> On Fri, Apr 28, 2017 at 1:05 AM, Ron Kelley <rkelley...@gmail.com> wrote:
> Thanks for the feedback, Spike.  After looking around for a while, I, too, 
> decided a small ubuntu container with a minimal firewall tool is the way to 
> go.  In my case, I used “ufw” but will also look at "firehol”.
> 
> Our firewall/NAT requirements are not very large, and I finally figured out 
> the right set of rules we need.  In essence, we just need to add these to the 
> /etc/ufw/before.rules file and restart ufw:
> 
> 
> with ONLY changes to /etc/ufw/before.rules, the NAT rules would be reapplied 
> (resulting multiple rules on NAT table) whenever you restart ufw. No big deal 
> if you plan to restart the container anyway on every rule change (or never 
> plan to change the rules), but not ideal if your plan is to use "ufw reload".
> 
> In my case I had to separate ufw NAT rules into a new custom chain, 
> ufw-before-prerouting: 
> 
> 
> - edit /etc/ufw/before.init (copy it from /usr/share/ufw/before.init), and 
> make it executable (e.g. chmod 700). Snippet of edited lines:
> 
> start)
> iptables -t nat -N ufw-before-prerouting || true
> iptables -t nat -I PREROUTING -j ufw-before-prerouting || true
> ;;
> stop)
> iptables -t nat -D PREROUTING -j ufw-before-prerouting || true
> iptables -t nat -F ufw-before-prerouting || true
> iptables -t nat -X ufw-before-prerouting || true
> ;;
> 
> 
> 
> - add NAT lines to /etc/ufw/before.rules to look similar to this:
> 
> # nat Table rules
> *nat
> :ufw-before-prerouting - [0:0]
> 
> # DNAT example
> -A ufw-before-prerouting -i eth0 -p tcp --dport 21122 -j DNAT --to 
> 10.0.3.211:22
> 
> 
> -- 
> Fajar
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD firewall container?

2017-04-27 Thread Ron Kelley
Thanks for the feedback, Spike.  After looking around for a while, I, too, 
decided a small ubuntu container with a minimal firewall tool is the way to go. 
 In my case, I used “ufw” but will also look at "firehol”.  

Our firewall/NAT requirements are not very large, and I finally figured out the 
right set of rules we need.  In essence, we just need to add these to the 
/etc/ufw/before.rules file and restart ufw:


*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]

# Port Forwardings (change dport to match incoming port and destination:port to 
match target server behind eth1)
-A PREROUTING -d 192.168.24.5 -p tcp --dport 222 -j DNAT --to-destination 
30.1.1.3:22
-A PREROUTING -d 192.168.24.5 -p tcp --dport 801 -j DNAT --to-destination 
30.1.1.3:80
-A PREROUTING -d 192.168.24.5 -p tcp --dport 802 -j DNAT --to-destination 
30.1.1.3:443

# Use this if you have IP Aliases on the front end pointing to different 
back-end servers
-A PREROUTING -d 192.168.24.6 -p tcp --dport 222 -j DNAT --to-destination 
30.1.1.3:22

# NAT traffic from inside network (30.1.1.0/24) through eth0 to the world
-A POSTROUTING -s 30.1.1.0/24 -o eth0 -j MASQUERADE

COMMIT


The above simply says our NAT router (192.168.24.5) sits in front of a number 
of private IPs (30.1.1.0/24) and provides port forwarding as well as outbound 
NAT.  The “IP Alias” line can be used in case we need additional front-end 
IPs (i.e. 192.168.24.6).

Seems to work very well so far.

Thanks for all the feedback!

-Ron





> On Apr 27, 2017, at 1:50 PM, Spike <sp...@drba.org> wrote:
> 
> after testing one of too many firewall solutions I went back to just running 
> plain ubuntu and then put an iptables "frontend" on top of it. In my case I 
> chose firehol, but there's a number of them and it's largely a matter of 
> taste/how you work. It really depends what you care for, if you want an 
> appliance kind of thing that won't work, as it doesn't come with batteries 
> included, ie a gui, graphs etc, but if you want a clean working firehol 
> without the hassle of managing rules yourself, then ubuntu + a fw manager 
> will do wonders and actually keeps things simpler ime.
> 
> hope that helps,
> 
> Spike
> 
> On Mon, Apr 24, 2017 at 10:07 PM gunnar.wagner <gunnar.wag...@netcologne.de> 
> wrote:
> I know that's only touching your point slightly but (as far as I know) 
> pfSense requires 2 physical WAN ports in order to run. 
> So I'd doubt is can be containerized to begin with
> 
> 
> On 4/25/2017 12:10 AM, Ron Kelley wrote:
>> Greetings all,
>> 
>> I am looking for an easy-to-configure firewall tool that provides 
>> NAT/Gateway/Firewall functions for other containers.  I know I can use 
>> iptables, etc, but I would like something more easily managed (web-based 
>> tool?) like pfSense, IPFire, IPCop, etc.  Unfortunately, many of the tools 
>> are ISO based which require “real” VM instances.  
>> 
>> I can’t seem to find any turn-key LXD firewall images; maybe I am looking in 
>> the wrong place?
>> 
>> Any pointers?
>> 
>> Thanks.
>> ___
>> lxc-users mailing list
>> 
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
> 
> -- 
> Gunnar Wagner | Yongfeng Village Group 12 #5, Pujiang Town, Minhang District, 
> 201112 Shanghai, P.R. CHINA 
> mob +86.159.0094.1702 | skype: professorgunrad | wechat: 15900941702
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc network create - macvlan

2017-04-26 Thread Ron Kelley
Good suggestion, but same result:

lxc network attach QA-Server-03:vxlan.1101 QA-Server-03:centos-vm4
error: not found


On the remote server (QA-Server-03):
--
root@QA-Server-03:~# ip link |grep 1101
4: vxlan.1101: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state 
UNKNOWN mode DEFAULT group default qlen 1000

The network interface is definitely there.



> On Apr 26, 2017, at 4:02 PM, Stéphane Graber <stgra...@ubuntu.com> wrote:
> 
> On Wed, Apr 26, 2017 at 03:55:58PM -0400, Ron Kelley wrote:
>> Thanks.  It seems the command works for the local server only.  When I try 
>> to do this from a remote server ("lxc network attach vxlan.1101 
>> QA-Server-04:centos-vm4), I get “error: not found”.  Is that expected?
> 
> Did you try?
> 
>   lxc network attach QA-Server-04:vxlan.1101 QA-Server-04:centos-vm4
> 
> 
>> Also, it seems the command, "lxc network show :” has a bug as 
>> the command returns with “error: json: cannot unmarshal array into Go value 
>> of type api.Network".  I have just filed a bug (#3230) to track this issue.
>> 
>> 
>> 
>> 
>> 
>> 
>>> On Apr 26, 2017, at 3:49 PM, Stéphane Graber <stgra...@ubuntu.com> wrote:
>>> 
>>> On Wed, Apr 26, 2017 at 03:37:25PM -0400, Ron Kelley wrote:
>>>> Trying to create a macvlan network type using LXD 2.12 but can’t figure 
>>>> out the syntax.  I have a number of vxlan interfaces created via the “ip 
>>>> link” command and would like to create the corresponding LXD networks 
>>>> without having to create separate profiles.  I tried the commands "lxc 
>>>> network create vxlan1101 type=physical” and "lxc network create vxlan1101 
>>>> nictype=macvlan”.  Each time, I get "error: Invalid network configuration 
>>>> key:”
>>>> 
>>>> I looked at the network configuration document 
>>>> (https://github.com/lxc/lxd/blob/master/doc/networks.md) but don’t see 
>>>> anywhere to specify a nic type.
>>>> 
>>>> For what it’s worth, creating a new profile using the “nictype:macvlan" 
>>>> and "parent: vxlan.1101” key values works just fine.
>>>> 
>>>> 
>>>> Any pointers?
>>> 
>>> LXD only manages bridges. You can make a bridge that's connected to your
>>> macvlan device with:
>>> 
>>> lxc network create br-vxlan1101 ipv4.address=none ipv6.address=none 
>>> bridge.external_interfaces=vxlan.1101
>>> 
>>> Which you can then attach containers to.
>>> 
>>> 
>>> But it sounds like just attaching the container directly using macvlan 
>>> would be easiest:
>>> 
>>> lxc network attach vxlan.1101 
>>> 
>>> -- 
>>> Stéphane Graber
>>> Ubuntu developer
>>> http://www.ubuntu.com
>>> ___
>>> lxc-users mailing list
>>> lxc-users@lists.linuxcontainers.org
>>> http://lists.linuxcontainers.org/listinfo/lxc-users
>> 
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
> 
> -- 
> Stéphane Graber
> Ubuntu developer
> http://www.ubuntu.com
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc network create - macvlan

2017-04-26 Thread Ron Kelley
Thanks.  It seems the command works for the local server only.  When I try to 
do this from a remote server ("lxc network attach vxlan.1101 
QA-Server-04:centos-vm4), I get “error: not found”.  Is that expected?


Also, it seems the command, "lxc network show :” has a bug as 
the command returns with “error: json: cannot unmarshal array into Go value of 
type api.Network".  I have just filed a bug (#3230) to track this issue.






> On Apr 26, 2017, at 3:49 PM, Stéphane Graber <stgra...@ubuntu.com> wrote:
> 
> On Wed, Apr 26, 2017 at 03:37:25PM -0400, Ron Kelley wrote:
>> Trying to create a macvlan network type using LXD 2.12 but can’t figure out 
>> the syntax.  I have a number of vxlan interfaces created via the “ip link” 
>> command and would like to create the corresponding LXD networks without 
>> having to create separate profiles.  I tried the commands "lxc network 
>> create vxlan1101 type=physical” and "lxc network create vxlan1101 
>> nictype=macvlan”.  Each time, I get "error: Invalid network configuration 
>> key:”
>> 
>> I looked at the network configuration document 
>> (https://github.com/lxc/lxd/blob/master/doc/networks.md) but don’t see 
>> anywhere to specify a nic type.
>> 
>> For what it’s worth, creating a new profile using the “nictype:macvlan" and 
>> "parent: vxlan.1101” key values works just fine.
>> 
>> 
>> Any pointers?
> 
> LXD only manages bridges. You can make a bridge that's connected to your
> macvlan device with:
> 
>  lxc network create br-vxlan1101 ipv4.address=none ipv6.address=none 
> bridge.external_interfaces=vxlan.1101
> 
> Which you can then attach containers to.
> 
> 
> But it sounds like just attaching the container directly using macvlan would 
> be easiest:
> 
>  lxc network attach vxlan.1101 
> 
> -- 
> Stéphane Graber
> Ubuntu developer
> http://www.ubuntu.com
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc network create - macvlan

2017-04-26 Thread Ron Kelley
Trying to create a macvlan network type using LXD 2.12 but can’t figure out the 
syntax.  I have a number of vxlan interfaces created via the “ip link” command 
and would like to create the corresponding LXD networks without having to 
create separate profiles.  I tried the commands "lxc network create vxlan1101 
type=physical” and "lxc network create vxlan1101 nictype=macvlan”.  Each time, 
I get "error: Invalid network configuration key:”

I looked at the network configuration document 
(https://github.com/lxc/lxd/blob/master/doc/networks.md) but don’t see anywhere 
to specify a nic type.

For what it’s worth, creating a new profile using the “nictype:macvlan" and 
"parent: vxlan.1101” key values works just fine.


Any pointers?


-Ron
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc info - how to show connected network interface

2017-04-26 Thread Ron Kelley
Thanks Stéphane - really appreciate the fast reply!

-Ron



> On Apr 26, 2017, at 2:40 PM, Stéphane Graber <stgra...@ubuntu.com> wrote:
> 
> Ah yeah, you can do that or even "lxc config device list" if all you
> want to do is validate the LXD config rather than what's actually in the
> container's namespace.
> 
> On Wed, Apr 26, 2017 at 02:35:10PM -0400, Ron Kelley wrote:
>> As soon as I get “send” I found the command:  lxc config show 
>> 
>> 
>> Sorry for the noise…
>> 
>> 
>> 
>>> On Apr 26, 2017, at 2:33 PM, Ron Kelley <rkelley...@gmail.com> wrote:
>>> 
>>> Running LXD 2.12 on a couple of Ubuntu 17.04 servers with a local “manager” 
>>> node and a remote worker node.  I started a remote CentOS 6 container on 
>>> the worker node then attached a network via "lxc network attach eth1 
>>> LXD-Server-01:centos6-testing” (the default profile does not have a network 
>>> interface configured).  To verify the new interface was attached properly, 
>>> I ran "lxc info --verbose LXD-Server-01:centos6-testing” but the network 
>>> details are not listed.  Here is the output:
>>> 
>>> ---
>>> Name: centos6-testing
>>> Remote: https://10.1.2.3:8443
>>> Architecture: x86_64
>>> Created: 2017/04/26 18:12 UTC
>>> Status: Running
>>> Type: persistent
>>> Profiles: default
>>> Pid: 1878
>>> Ips:
>>> lo: inet127.0.0.1
>>> lo: inet6   ::1
>>> Resources:
>>> Processes: 6
>>> CPU usage:
>>>   CPU usage (in seconds): 0
>>> Memory usage:
>>>   Memory (current): 13.57MB
>>>   Memory (peak): 14.88MB
>>> Network usage:
>>>   eth0:
>>> Bytes received: 0B
>>> Bytes sent: 0B
>>> Packets received: 0
>>> Packets sent: 0
>>>   lo:
>>> Bytes received: 0B
>>> Bytes sent: 0B
>>> Packets received: 0
>>> Packets sent: 0
>>> ---
>>> 
>>> Is there any way to verify which network is currently attached to the 
>>> container?
>>> 
>>> Thanks.
>>> 
>>> -Ron
>> 
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
> 
> -- 
> Stéphane Graber
> Ubuntu developer
> http://www.ubuntu.com
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc info - how to show connected network interface

2017-04-26 Thread Ron Kelley
As soon as I hit “send”, I found the command:  lxc config show 

Sorry for the noise…



> On Apr 26, 2017, at 2:33 PM, Ron Kelley <rkelley...@gmail.com> wrote:
> 
> Running LXD 2.12 on a couple of Ubuntu 17.04 servers with a local “manager” 
> node and a remote worker node.  I started a remote CentOS 6 container on the 
> worker node then attached a network via "lxc network attach eth1 
> LXD-Server-01:centos6-testing” (the default profile does not have a network 
> interface configured).  To verify the new interface was attached properly, I 
> ran "lxc info --verbose LXD-Server-01:centos6-testing” but the network 
> details are not listed.  Here is the output:
> 
> ---
> Name: centos6-testing
> Remote: https://10.1.2.3:8443
> Architecture: x86_64
> Created: 2017/04/26 18:12 UTC
> Status: Running
> Type: persistent
> Profiles: default
> Pid: 1878
> Ips:
>  lo:  inet127.0.0.1
>  lo:  inet6   ::1
> Resources:
>  Processes: 6
>  CPU usage:
>CPU usage (in seconds): 0
>  Memory usage:
>Memory (current): 13.57MB
>Memory (peak): 14.88MB
>  Network usage:
>eth0:
>  Bytes received: 0B
>  Bytes sent: 0B
>  Packets received: 0
>  Packets sent: 0
>lo:
>  Bytes received: 0B
>  Bytes sent: 0B
>  Packets received: 0
>  Packets sent: 0
> ---
> 
> Is there any way to verify which network is currently attached to the 
> container?
> 
> Thanks.
> 
> -Ron

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc info - how to show connected network interface

2017-04-26 Thread Ron Kelley
Running LXD 2.12 on a couple of Ubuntu 17.04 servers with a local “manager” 
node and a remote worker node.  I started a remote CentOS 6 container on the 
worker node then attached a network via "lxc network attach eth1 
LXD-Server-01:centos6-testing” (the default profile does not have a network 
interface configured).  To verify the new interface was attached properly, I 
ran "lxc info --verbose LXD-Server-01:centos6-testing” but the network details 
are not listed.  Here is the output:

---
Name: centos6-testing
Remote: https://10.1.2.3:8443
Architecture: x86_64
Created: 2017/04/26 18:12 UTC
Status: Running
Type: persistent
Profiles: default
Pid: 1878
Ips:
  lo:   inet127.0.0.1
  lo:   inet6   ::1
Resources:
  Processes: 6
  CPU usage:
CPU usage (in seconds): 0
  Memory usage:
Memory (current): 13.57MB
Memory (peak): 14.88MB
  Network usage:
eth0:
  Bytes received: 0B
  Bytes sent: 0B
  Packets received: 0
  Packets sent: 0
lo:
  Bytes received: 0B
  Bytes sent: 0B
  Packets received: 0
  Packets sent: 0
---

Is there any way to verify which network is currently attached to the container?

Thanks.

-Ron
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] discuss.linuxcontainers.org experiment

2017-04-25 Thread Ron Kelley
Stéphane,

Thanks for setting up the discussion group.  I just joined…

As a suggestion, it would be great if we could have an official “best 
practices” section supported/endorsed by the Canonical team.  Or, a section 
whereby people can contribute their designs and others can add their 
viewpoints.  I know many people use LXC/LXD for home/personal use, but many of 
us are using this technology in data center production environments.

Some ideas off the top of my head:
* How to manage tens/hundreds of LXD servers (single host, multi-host, or 
multi-geo locations)
* How to quickly find mis-behaving containers (consuming too much resources, 
etc)
* How to get container run-time stats per LXD server
* Best practices when backing up, restoring, cloning containers
* Best practices when deploying containers (same UID, different UID per 
container, etc)

As we adopt LXD more and more in our DC designs, it becomes increasingly 
important for our organization to leverage best practices from the industry 
experts. 

Thanks,

-Ron


On Apr 25, 2017, at 1:50 PM, Stéphane Graber  wrote:
> 
> Hey there,
> 
> We know that not everyone enjoys mailing-lists and searching through
> mailing-list archives and would rather use a platform that's dedicated
> to discussion and support.
> 
> We don't know exactly how many of you would prefer using something like
> that instead of the mailing-list or how many more people are out there
> who would benefit from such a platform.
> 
> But we're giving it a shot and will see how things work out over the
> next couple of months. If we see little interest, we'll just kill it off
> and revert to using just the lxc-users list. If we see it take off, we
> may start recommending it as the preferred place to get support and
> discuss LXC/LXD/LXCFS.
> 
> 
> The new site is at: https://discuss.linuxcontainers.org
> 
> 
> We support both Github login as well as standalone registration, so that
> should make it easy for anyone interested to be able to post questions
> and content.
> 
> The site is configured to self-moderate, so active users who post good
> content and help others will automatically get more privileges. That
> should let the community shape how this space works rather than have me
> and the core team babysit it :)
> 
> 
> Discourse (the engine we use for this) supports notifications by e-mail
> as well as responses and topic creation by e-mail. So for those of you
> who don't like dealing with web stuff, you can tweak the e-mail settings
> in your account and then interact with it almost entirely through
> e-mails.
> 
> Just a note on that bit, the plaintext version of those e-mails isn't so
> great right now, it's not properly wrapped, contains random spacing and
> the occasional html. I subscribed myself to receive all notifications
> and will try to tweak the discourse e-mail code for those of us who use
> mutt or other text-based clients.
> 
> 
> Anyway, please feel free to post your questions over there, share
> stories on what you're doing with LXC/LXD/LXCFS, ...
> 
> We just ask that bug reports remain on Github. If a support question
> turns out to be a bug, we'll file one for you on Github or ask for you
> to go file one there (similar to what we've been doing on this list).
> 
> 
> Hope this is a useful addition to our community!
> 
> Stéphane
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXD firewall container?

2017-04-24 Thread Ron Kelley
Greetings all,

I am looking for an easy-to-configure firewall tool that provides 
NAT/Gateway/Firewall functions for other containers.  I know I can use 
iptables, etc, but I would like something more easily managed (web-based tool?) 
like pfSense, IPFire, IPCop, etc.  Unfortunately, many of the tools are ISO 
based which require “real” VM instances.  

I can’t seem to find any turn-key LXD firewall images; maybe I am looking in 
the wrong place?

Any pointers?

Thanks.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD 2.12 - VXLAN configuration connected to eth1

2017-04-23 Thread Ron Kelley
Thanks Stéphane.  Really appreciate the fast reply.  Will be looking forward to 
the next code drop.


As for the macvlan issue, it turns out the interface was created but never 
brought up.  After running “ifconfig vxlan1500 up”, I could get both containers 
pinging properly across the network.

For anyone else who might want to try VXLAN multicast between containers, here 
is a quick set of commands I used to get it working:

ip -4 route add 239.0.0.1 dev eth1
ip link add vxlan1500 type vxlan group 239.0.0.1 dev eth1 dstport 0 id 1500
ifconfig vxlan1500 up
(then attach containers to the “vxlan1500” interface, e.g. via a macvlan profile)


Simply replace “vxlan1500” with your interface name of choice and pick your 
physical ethernet interface (eth1 in the example above).  The parameter “id 
1500” specifies the VXLAN Network ID (0-16777215).

For what it’s worth, this is a huge win for me as I can set up a real 
environment using software-defined VLANs without modifying any top-of-rack 
switches.  I simply create a new VXLAN segment for each new customer on our LXD 
servers and deploy a software firewall that manages traffic for that segment 
and acts as its local gateway.
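
For each new customer, the per-host work then boils down to a couple of 
commands (a sketch; the names and VNI are just examples, and I assume the 
default profile still supplies the root disk):

---
# on every LXD host: create the customer's VXLAN segment
ip link add vxlan1501 type vxlan group 239.0.0.1 dev eth1 dstport 0 id 1501
ip link set vxlan1501 up

# per-customer profile whose NIC rides that segment
lxc profile create customer-a
lxc profile device add customer-a eth0 nic nictype=macvlan parent=vxlan1501

lxc launch ubuntu: customer-a-web01 -p default -p customer-a
---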

Awesome!


-Ron

 

> On Apr 23, 2017, at 5:00 PM, Stéphane Graber <stgra...@ubuntu.com> wrote:
> 
> Hi,
> 
> I sent a pull request which allows oevrriding the interface in multicast mode:
>https://github.com/lxc/lxd/pull/3210
> 
> When writing that code, I did notice that in my earlier implementation I
> always selected the default interface for those, so that explains why no
> amount of routing trickery would help.
> 
> Stéphane
> 
> On Sun, Apr 23, 2017 at 04:36:43PM -0400, Ron Kelley wrote:
>> Thanks for the speedy reply!  From my testing, the VXLAN tunnel always seems 
>> to use eth0.  After running the “ip -4 route add” command per your note 
>> below, I disabled eth1 on one of the hosts but was still able to ping 
>> between the two containers.  I re-enabled that interface and disabled eth0; 
>> the ping stopped.  It seems the VXLAN tunnel is bound to eth0.
>> 
>> By chance, is there a workaround to make this work properly?  I also tried 
>> using the macvlan interface type specifying a VXLAN tunnel interface and it 
>> would not work either.  For clarity, this is what I did:
>> 
>> ip link add vxlan500 type vxlan group 239.0.0.1 dev eth1 dstport 0 id 500
>> ip route -4 add 239.0.0.1 eth1
>> > to “vxlan500”>
>> 
>> I was hoping a raw VXLAN interface would work instead of using the LXD 
>> create command.
>> 
>> 
>> -Ron
>> 
>> 
>>> On Apr 23, 2017, at 4:18 PM, Stéphane Graber <stgra...@ubuntu.com> wrote:
>>> 
>>> Hi,
>>> 
>>> VXLAN in multicast mode (as is used in your case), when no multicast
>>> address is specified will be using 239.0.0.1.
>>> 
>>> This means that whatever route you have to reach "239.0.0.1" will be
>>> used by the kernel for the VXLAN tunnel, or so would I expect.
>>> 
>>> 
>>> Does:
>>> ip -4 route add 239.0.0.1 dev eth1
>>> 
>>> Cause the VXLAN traffic to now use eth1?
>>> 
>>> If it doesn't, then that'd suggest that the multicast VXLAN interface
>>> does in fact get tied to a particular parent interface and we should
>>> therefore add an option to LXD to let you choose that interface.
>>> 
>>> Stéphane
>>> 
>>> On Sun, Apr 23, 2017 at 04:04:03PM -0400, Ron Kelley wrote:
>>>> Greetings all.
>>>> 
>>>> Following Stéphane’s excellent guide on using multicast VXLAN with LXD 
>>>> (https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/).  In my 
>>>> lab, I have setup a few servers running Ubuntu 16.04 with LXD 2.12 and 
>>>> multiple interfaces (eth0, eth1, eth2).  My goal is to setup a 
>>>> multi-tenant computing solution using VXLAN to separate network traffic.  
>>>> I want to dedicate eth0 as the mgmt-only interface and use eth1 (or other 
>>>> additional interfaces) as customer-only interfaces. I have read a number 
>>>> of guides but can’t find anything that clearly spells out how to create 
>>>> bridged interfaces using eth1, eth2, etc for LXD.
>>>> 
>>>> I can get everything working using a single “eth0” interface on my LXD 
>>>> hosts using the following commands:
>>>> ---
>>>> lxc network create vxlan100 ipv4.address=none ipv6.address=none 
>>>> tunnel.vxlan100.protocol=vxlan tunnel.vxlan100.id=100
>>>> lxc launch ubuntu: testvm01
>>

[lxc-users] LXD 2.12 - VXLAN configuration connected to eth1

2017-04-23 Thread Ron Kelley
Greetings all.

Following Stéphane’s excellent guide on using multicast VXLAN with LXD 
(https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/).  In my lab, 
I have setup a few servers running Ubuntu 16.04 with LXD 2.12 and multiple 
interfaces (eth0, eth1, eth2).  My goal is to setup a multi-tenant computing 
solution using VXLAN to separate network traffic.  I want to dedicate eth0 as 
the mgmt-only interface and use eth1 (or other additional interfaces) as 
customer-only interfaces. I have read a number of guides but can’t find 
anything that clearly spells out how to create bridged interfaces using eth1, 
eth2, etc for LXD.

I can get everything working using a single “eth0” interface on my LXD hosts 
using the following commands:
---
lxc network create vxlan100 ipv4.address=none ipv6.address=none 
tunnel.vxlan100.protocol=vxlan tunnel.vxlan100.id=100
lxc launch ubuntu: testvm01
lxc network attach vxlan100 testvm01
---

All good so far.  I created two test containers running on separate LXD servers 
using the above VXLAN ID and gave each a static IP Address (i.e.: 10.1.1.1/24 
and 10.1.1.2/24).  Both can ping back and forth.  100% working.

The next step is to use eth1 instead of eth0 on my LXD servers,  but I can’t 
find a keyword in the online docs that specify which interface to bind 
(https://github.com/lxc/lxd/blob/master/doc/networks.md).

Any pointers/clues?

Thanks,

-Ron
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD (lxc 2.0.6 + lxd 2.0.8) - OOM problem

2017-01-17 Thread Ron Kelley
Follow-up.  Seems to be a bug with the kernel (4.4.0-59).  Heads-up to everyone…

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1655842



On Jan 17, 2017, at 7:15 AM, Ron Kelley <rkelley...@gmail.com> wrote:

Greetings all,

Running Ubuntu 16.04 with 5G RAM, 20G SWAP, and LXD (LXC v.2.0.6 and LXD 
2.0.8).  We recently did a system update on our LXD servers and started getting 
a whole bunch of OOM messages from the containers.  Something like this:


--
Jan 17 06:20:54 LXD_Server_01 kernel: [259185.075154] mysqld invoked oom-killer: gfp_mask=0x26000c0, order=2, oom_score_adj=0
Jan 17 06:20:54 LXD_Server_01 kernel: [259185.075158] mysqld cpuset=DB-Server3 mems_allowed=0
Jan 17 06:20:54 LXD_Server_01 kernel: [259185.075166] CPU: 0 PID: 27649 Comm: mysqld Not tainted 4.4.0-59-generic #80-Ubuntu
Jan 17 06:20:54 LXD_Server_01 kernel: [259185.075167] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 04/14/2014
--



The container (www-somesitename-com) is using a custom profile like this:
--
name: Dual_Network_MySQL
config:
 limits.cpu: "2"
 limits.memory: 512MB
 limits.memory.swap: "true"
 raw.lxc: lxc.cgroup.memory.memsw.limit_in_bytes = 1300M
description: ""
devices:
 eth0:
   name: eth0
   nictype: macvlan
   parent: eth1.2005
   type: nic
 eth1:
   name: eth1
   nictype: macvlan
   parent: eth1.2006
   type: nic
--



The above profile should give the container 1.8GB of RAM (512RAM + 1.3G SWAP).  
If I look at the container stats, I don’t see where RAM+SWAP were exceeded:
--
Name: DB-Server3
Remote: unix:/var/lib/lxd/unix.socket
Architecture: x86_64
Created: 2016/10/17 06:47 UTC
Status: Running
Type: persistent
Profiles: Dual_Network_MySQL
Pid: 2215
Ips:
 eth0:  inet1.2.3.4
 eth0:  inet6   X
 eth1:  inet1.2.3.4
 eth1:  inet6   
 lo:inet127.0.0.1
 lo:inet6   ::1
Resources:
 Processes: 19
 Memory usage:
   Memory (current): 112.85MB
   Memory (peak): 271.26MB
   Swap (current): 12.23MB
   Swap (peak): 5.39MB
 Network usage:
   eth0:
 Bytes received: 4.17GB
 Bytes sent: 69.48GB
 Packets received: 25587831
 Packets sent: 31668639
   eth1:
 Bytes received: 1.53GB
 Bytes sent: 36.13GB
 Packets received: 9743914
 Packets sent: 14022159
   lo:
 Bytes received: 0 bytes
 Bytes sent: 0 bytes
 Packets received: 0
 Packets sent: 0
--

This happens on a variety of LXD servers (we have 5 running right now) and a 
variety of containers.  Running “free -m” on the container server shows plenty 
of RAM and SWAP available.  The only thing common is the OS running in the 
container (Ubuntu 16.04).  It seems our CentOS7 containers don’t have this 
issue.

Any clues/pointers?

Thanks.





___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXD (lxc 2.0.6 + lxd 2.0.8) - OOM problem

2017-01-17 Thread Ron Kelley
Greetings all,

Running Ubuntu 16.04 with 5G RAM, 20G SWAP, and LXD (LXC v.2.0.6 and LXD 
2.0.8).  We recently did a system update on our LXD servers and started getting 
a whole bunch of OOM messages from the containers.  Something like this:


--
Jan 17 06:20:54 LXD_Server_01 kernel: [259185.075154] mysqld invoked oom-killer: gfp_mask=0x26000c0, order=2, oom_score_adj=0
Jan 17 06:20:54 LXD_Server_01 kernel: [259185.075158] mysqld cpuset=DB-Server3 mems_allowed=0
Jan 17 06:20:54 LXD_Server_01 kernel: [259185.075166] CPU: 0 PID: 27649 Comm: mysqld Not tainted 4.4.0-59-generic #80-Ubuntu
Jan 17 06:20:54 LXD_Server_01 kernel: [259185.075167] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 04/14/2014
--



The container (www-somesitename-com) is using a custom profile like this:
--
name: Dual_Network_MySQL
config:
  limits.cpu: "2"
  limits.memory: 512MB
  limits.memory.swap: "true"
  raw.lxc: lxc.cgroup.memory.memsw.limit_in_bytes = 1300M
description: ""
devices:
  eth0:
name: eth0
nictype: macvlan
parent: eth1.2005
type: nic
  eth1:
name: eth1
nictype: macvlan
parent: eth1.2006
type: nic
--



The above profile should give the container 1.8GB of RAM (512RAM + 1.3G SWAP).  
If I look at the container stats, I don’t see where RAM+SWAP were exceeded:
--
Name: DB-Server3
Remote: unix:/var/lib/lxd/unix.socket
Architecture: x86_64
Created: 2016/10/17 06:47 UTC
Status: Running
Type: persistent
Profiles: Dual_Network_MySQL
Pid: 2215
Ips:
  eth0: inet1.2.3.4
  eth0: inet6   X
  eth1: inet1.2.3.4
  eth1: inet6   
  lo:   inet127.0.0.1
  lo:   inet6   ::1
Resources:
  Processes: 19
  Memory usage:
Memory (current): 112.85MB
Memory (peak): 271.26MB
Swap (current): 12.23MB
Swap (peak): 5.39MB
  Network usage:
eth0:
  Bytes received: 4.17GB
  Bytes sent: 69.48GB
  Packets received: 25587831
  Packets sent: 31668639
eth1:
  Bytes received: 1.53GB
  Bytes sent: 36.13GB
  Packets received: 9743914
  Packets sent: 14022159
lo:
  Bytes received: 0 bytes
  Bytes sent: 0 bytes
  Packets received: 0
  Packets sent: 0
--

This happens on a variety of LXD servers (we have 5 running right now) and a 
variety of containers.  Running “free -m” on the container server shows plenty 
of RAM and SWAP available.  The only thing common is the OS running in the 
container (Ubuntu 16.04).  It seems our CentOS7 containers don’t have this 
issue.
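
In case it helps with the debugging, these are the cgroup values I am comparing 
against, read straight from the host (a sketch; the path assumes cgroup v1 and 
the default lxc cgroup layout):

---
cat /sys/fs/cgroup/memory/lxc/DB-Server3/memory.limit_in_bytes
cat /sys/fs/cgroup/memory/lxc/DB-Server3/memory.memsw.limit_in_bytes
cat /sys/fs/cgroup/memory/lxc/DB-Server3/memory.memsw.max_usage_in_bytes
---

The max_usage file should show the high-water mark of RAM+swap since the 
container started.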

Any clues/pointers?

Thanks.




___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] would there be value in starting an LXD community online collection of how-to related information

2017-01-09 Thread Ron Kelley
Brian,

Absolutely agree on an online collection of how-to docs for LXD.  We started 
using LXD about 8mos ago as an alternative to full-blown VMs for hosting 
WordPress websites.  Since then, we now have 4 main LXD containers hosting 
over 100 sites, and we are expanding every day.  Our goal is to move all our 
sites (and other admin VMs) over to LXD in the future.  

I think a wiki page of some sort to share the data would be a fantastic idea 
for the community.  In fact, I am spinning up my own personal blog in a few 
days and will have a bunch of notes on LXD.

Let me know what I can do to help or contribute.

-Ron



On Jan 8, 2017, at 2:51 PM, brian mullan  wrote:

I know there is the LXD github info that the developers provide and there are 
other awesome sources of info like Stephane Graber, Serge Hallyn, Tycho's etc 
websites on LXD.

But I've also seen a tremendous amount of LXD-related "how-to's" scattered all 
over the web, and I've tried to collect what I personally found on the LXD 
subreddit:  https://www.reddit.com/r/LXD/ 

I see great questions & answers on the lxc-users mailer all the time but unless 
I cut & paste ones that are particularly insightful in order to save them for 
later I find it sometimes hard to "re-find" them later by going to the 
lxc-users mailer archive 
(https://lists.linuxcontainers.org/pipermail/lxc-users/ 
) as I haven't found a 
convenient way to "search" the entire archive for specific info other than 
expanding each month-by-month entry by subject although I do understand that 
there appears to be a way to make pipermail archives searchable - 
https://wiki.list.org/DOC/How%20do%20I%20make%20the%20archives%20searchable 


All too often I read comments by people along the lines of  "Why is there no 
general LXD Users Guide".

Those types of comments are are often accompanied by questions related to 
specific areas like how to map devices or other "how-to" configure type 
questions.

Myself, I'd started collecting tidbits into a .ODT file to reference when I 
need to, and so far I have about 19 pages of info on various topics.  I have 
to believe I'm not the only one that's got their own list of LXD how-to's saved 
away on their pc!

All of the info I've gathered was gleaned off of various sites on the web and 
unfortunately since I was, at the time only collecting the info for myself, I 
didn't keep a Link or author reference to where some of it came from (sorry 
about that).  Some of my .ODT probably came from things I'd read by the above 
individuals.

Attached is my .ODT.   The format may not be the greatest but again, it has 
been for my own reference so far.

I was thinking that if something like this might be of interest to more people, 
then we could host the file on github somewhere and the LXD user community 
could add/correct/delete info over time on "how to" do things with LXD.

So thought I'd just throw this out there.

Brian
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Strange freezes with btrfs backend

2016-12-03 Thread Ron Kelley

My 0.02

We have been using btrfs in production for more than a year on other 
projects and about 6mos with LXD.  It has been rock solid.  I have multiple 
LXD servers each with >20 containers. We have a separate btrfs filesystem 
(with compression enabled) to store the LXD containers. I take nightly 
snapshots for all containers, and each server probably has 2000 snapshots. 
The only issue thus far is the IO hit when deleting lots of snapshots at 
one time.  You need to delete a few (10 at a time), pause for 60secs, then 
delete the next 10.
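For anyone scripting that cleanup, a rough sketch of the batching (the snapshot 
path is an assumption, not our actual layout) would be:

  # delete btrfs snapshots in batches of 10 with a 60-second pause between batches
  n=0
  for snap in /var/lib/lxd/snapshots-backup/*; do
      btrfs subvolume delete "$snap"
      n=$((n+1))
      if [ $((n % 10)) -eq 0 ]; then sleep 60; fi
  done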


I have used ZFS in Linux in the past and could never get adequate 
performance - regardless of tuning or amount of RAM given to ZFS.  In fact, 
I started using ZFS for our backup server (64TB raw storage with 32GB RAM) 
but had to move back to XFS due to severe performance issues.  Nothing 
fancy; I did a by-the-book install and enabled compression and snapshots. I 
tried every tuning option available (including SSD for L2-ARC). Nothing 
would improve the performance.


To the OP: are you sure btrfs is causing your issues?  Have you traced the 
I/O activity during the hiccup moments?





On December 3, 2016 7:37:21 AM "Fajar A. Nugraha"  wrote:


On Sat, Dec 3, 2016 at 6:01 PM, Sergiusz Pawlowicz 
wrote:


> You'd need to set arc to be as small as possible:
> # cat /etc/modprobe.d/zfs-arc-max.conf
> options zfs zfs_arc_max=67108865

What is the sense of using ZFS if you don't use its cache? Nonsense. It



- excellent integration with lxd
- data integrity verification using checksum
- thin-lvm-like space management
- send/receive
- compression
- much more mature compared to btrfs



will work slower and be less reliable than ext4.



I never said it was faster.

In general, zfs WILL be slower - to some extent - compared to ext4. Just
like ext4 (presumably with LVM and raid/mirror) will be slower compared to
writing to raw disk directly - especially if you also exclude any kind of
raid/mirror and volume manager.

To give more perspective to my particular use case, my EC2 zfsroot AMI only
use 1GB EBS thanks to lz4 compression. And that's with around 400 MB free
space. Thanks to zfs snapshot/clone, I can also use clones of my root as
containers (which is more efficient compared to LVM snapshots or overlay)

Is it a suitable solution for everyone? No.
Does it work for my use case? Yes. MUCH more so compared to ext4 or btrfs.
Will it work for Pierce's use case? I believe so.

--
Fajar



--
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] What are the options for connecting to storage?

2016-11-17 Thread Ron Kelley
Hi Ed,

Not sure how well that will work unless your unprivileged containers use the 
same UID/GID mappings on both of your LXD servers.  Looking forward to your test 
results :-)


-Ron


On Nov 17, 2016, at 10:11 AM, McDonagh, Ed <ed.mcdon...@rmh.nhs.uk> wrote:

Thanks Ron. Thinking my requirements through again, I’ve decided to scrap the 
iSCSI mount and use NFS/CIFS instead. So following your pointer I have NFS 
mounted in the host, and then bind-mounted that to the container. I’ve yet to 
see how that migrates, but one bridge at a time!
 
Thanks again.
 
Kind regards
 
Ed
 
 
From: lxc-users [mailto:lxc-users-boun...@lists.linuxcontainers.org] On Behalf 
Of Ron Kelley
Sent: 16 November 2016 19:49
To: LXC users mailing-list
Subject: Re: [lxc-users] What are the options for connecting to storage?
 
What about using a bind-mount?  Mount your iscsi volume to the LXD server and 
bind-mount it into the container.  A quick URL:  
https://github.com/lxc/lxd/issues/2005 <https://github.com/lxc/lxd/issues/2005>
 
 
 
 
 
 
On Nov 16, 2016, at 11:42 AM, McDonagh, Ed <ed.mcdon...@rmh.nhs.uk 
<mailto:ed.mcdon...@rmh.nhs.uk>> wrote:
 
Hi
 
I need to create a container that has access to a couple of TB to store image 
files (mostly 8MB upwards). My instinct is to create an iSCSI target on my 
Synology server and connect from the container to get a new disk that I can use 
for the storage.
 
I understand that the guest has to be privileged to do any sort of connection 
to storage, however it seems that the iSCSI node in the container doesn’t work.
 
What are my options? Do I need to use a SMB connection to the Synology server? 
Or is NFS better? Is there a way to connect using iSCSI?
 
If it makes any difference, I have two identical servers that I am running LXD 
on, and want to be able to move the containers between them for maintenance 
etc. Ideally live, but not essential. And that doesn’t seem to work anyway with 
lxd 2.0, so kind of a moot point!
 
Any help, suggestions or advice would be very welcome.
 
Kind regards
 
Ed
 
Ed McDonagh 
Head of Scientific Computing (Diagnostic Radiology)
Joint Department of Physics 
The Royal Marsden NHS Foundation Trust
Tel 020 7808 2512
Fax 020 7808 2522

 
 
Attention: 
This e-mail and any attachment is for authorised use by the intended 
recipient(s) only. It may contain proprietary, confidential and/or privileged 
information and should not be copied, disclosed, distributed, retained or used 
by any other party. If you are not an intended recipient please notify the 
sender immediately and delete this e-mail (including attachments and copies). 
The statements and opinions expressed in this e-mail are those of the author 
and do not necessarily reflect those of the Royal Marsden NHS Foundation Trust. 
The Trust does not take any responsibility for the statements and opinions of 
the author. 
Website: http://www.royalmarsden.nhs.uk <http://www.royalmarsden.nhs.uk/>
 
 
 
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org <mailto:lxc-users@lists.linuxcontainers.org>
http://lists.linuxcontainers.org/listinfo/lxc-users 
<http://lists.linuxcontainers.org/listinfo/lxc-users>
 

Attention: 
This e-mail and any attachment is for authorised use by the intended 
recipient(s) only. It may contain proprietary, confidential and/or privileged 
information and should not be copied, disclosed, distributed, retained or used 
by any other party. If you are not an intended recipient please notify the 
sender immediately and delete this e-mail (including attachments and copies). 
The statements and opinions expressed in this e-mail are those of the author 
and do not necessarily reflect those of the Royal Marsden NHS Foundation Trust. 
The Trust does not take any responsibility for the statements and opinions of 
the author. 
Website: http://www.royalmarsden.nhs.uk <http://www.royalmarsden.nhs.uk/>



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org <mailto:lxc-users@lists.linuxcontainers.org>
http://lists.linuxcontainers.org/listinfo/lxc-users 
<http://lists.linuxcontainers.org/listinfo/lxc-users>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] What are the options for connecting to storage?

2016-11-16 Thread Ron Kelley
What about using a bind-mount?  Mount your iscsi volume to the LXD server and 
bind-mount it into the container.  A quick URL:  
https://github.com/lxc/lxd/issues/2005 
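For illustration only, the rough shape of that setup (device name, mount point, 
and container name below are assumptions) would be:

  # on the LXD host: mount the iSCSI-backed filesystem
  mount /dev/sdb1 /mnt/images
  # pass the mounted directory into the container as a disk device
  lxc config device add my-container images disk source=/mnt/images path=/srv/images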






On Nov 16, 2016, at 11:42 AM, McDonagh, Ed  wrote:

Hi
 
I need to create a container that has access to a couple of TB to store image 
files (mostly 8MB upwards). My instinct is to create an iSCSI target on my 
Synology server and connect from the container to get a new disk that I can use 
for the storage.
 
I understand that the guest has to be privileged to do any sort of connection 
to storage, however it seems that the iSCSI node in the container doesn’t work.
 
What are my options? Do I need to use a SMB connection to the Synology server? 
Or is NFS better? Is there a way to connect using iSCSI?
 
If it makes any difference, I have two identical servers that I am running LXD 
on, and want to be able to move the containers between them for maintenance 
etc. Ideally live, but not essential. And that doesn’t seem to work anyway with 
lxd 2.0, so kind of a moot point!
 
Any help, suggestions or advice would be very welcome.
 
Kind regards
 
Ed
 
Ed McDonagh 
Head of Scientific Computing (Diagnostic Radiology)
Joint Department of Physics 
The Royal Marsden NHS Foundation Trust
Tel 020 7808 2512
Fax 020 7808 2522

 

Attention: 
This e-mail and any attachment is for authorised use by the intended 
recipient(s) only. It may contain proprietary, confidential and/or privileged 
information and should not be copied, disclosed, distributed, retained or used 
by any other party. If you are not an intended recipient please notify the 
sender immediately and delete this e-mail (including attachments and copies). 
The statements and opinions expressed in this e-mail are those of the author 
and do not necessarily reflect those of the Royal Marsden NHS Foundation Trust. 
The Trust does not take any responsibility for the statements and opinions of 
the author. 
Website: http://www.royalmarsden.nhs.uk 



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org 
http://lists.linuxcontainers.org/listinfo/lxc-users 

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Question about your storage on multiple LXC/LXD nodes

2016-11-03 Thread Ron Kelley
Hi Benoit,

Our environment is pretty locked down when it comes to upgrades at the Ubuntu 
server level.  We don't upgrade often (mainly for security-related stuff).  
That said, in the event of a mandatory reboot, we take a VM snapshot then take 
a small downtime.  Since Ubuntu 16 (re)boots so quickly, the downtime is 
usually less than 30secs for our servers.  Thus, no extended outage times.  If 
the upgrade fails, we easily roll back to the snapshot.

At this time, NFSv3 is the best solution for us.  Each NFS server 
has multiple NICs, redundant power supplies, etc (real enterprise-class 
systems).  In the event of an NFS server failure, we can reload from our backup 
servers (again, multiple backup servers, etc).

To address the single point of failure for NFS, we have been looking at 
something called ScaleIO.  It is a distributed/replicated block-level storage 
system much like gluster.  You create virtual LUNs and mount them on your 
hypervisor host; the hypervisor is responsible for managing the distributed 
access (think VMFS).  Each hypervisor sees the same LUN which makes VM 
migration simple.  This technology builds a Storage Area Network (SAN) over an 
IP network without expensive Fiber Channel infrastructure.  ScaleIO allows you 
to sustain multiple HDD failures or even complete storage node failures w/out 
downtime on your storage network.  The software is free for testing but you 
must purchase a support contract to use in production.  Just do a quick search 
for ScaleIO and read the literature.

Let me know if you have more questions...

Thanks,

-Ron




On 11/3/2016 11:38 AM, Benoit GEORGELIN - Association Web4all wrote:
> Hi Ron,
> sounds like a good way to manage it.  Thanks
> How do you handle your Ubuntu 16.04 upgrade / kernel update ? I case of
> a mandatory reboot, your LXD containers will have a downtime but maybe
> not a problem in your situation?
> 
> Regarding ceph, gluster and DRBD, the main concern is about
> performance/stability so you are right, NFS could be the "best" way to
> share the data across hyperviseurs 
> 
> Cordialement,
> 
> Benoît 
> 
> ------------
> *De: *"Ron Kelley" <rkelley...@gmail.com>
> *À: *"lxc-users" <lxc-users@lists.linuxcontainers.org>
> *Envoyé: *Jeudi 3 Novembre 2016 10:53:05
> *Objet: *Re: [lxc-users] Question about your storage on multiple
> LXC/LXDnodes
> 
> We do it slightly differently.  We run LXD containers on Ubuntu 16.04
> Virtual Machines (inside a virtualized infrastructure).  Each physical
> server has redundant network links to highly-available storage.  Thus,
> we don't have to migrate containers between LXD servers; instead we
> migrate the Ubuntu VM to another server/storage pool.  Additionally, we
> use BTRFS snapshots inside the Ubuntu server to quickly restore backups
> for the LXD containers themselves.
> 
> So far, everything has been rock solid.  The LXD containers work great
> inside Ubuntu VMs (performance, scale, etc).  In the unlikely event we
> have to migrate an LXD container from one server to another, we will
> simply do an LXD copy (with a small maintenance window).
> 
> As an aside: I have tried gluster, ceph, and even DRBD in the past w/out
> much success.  Eventually, we went back to NFSv3 servers for
> performance/stability.  I am looking into setting up an HA NFSv4 config
> to address the single point of failure with NFS v3 setups.
> 
> -Ron
> 
> 
> 
> 
> On 11/3/2016 9:42 AM, Benoit GEORGELIN - Association Web4all wrote:
>> Thanks, looks like nobody use LXD in a cluster
>>
>> Cordialement,
>>
>> Benoît
>>
>> 
>> *De: *"Tomasz Chmielewski" <man...@wpkg.org>
>> *À: *"lxc-users" <lxc-users@lists.linuxcontainers.org>
>> *Cc: *"Benoit GEORGELIN - Association Web4all"
> <benoit.george...@web4all.fr>
>> *Envoyé: *Mercredi 2 Novembre 2016 12:01:50
>> *Objet: *Re: [lxc-users] Question about your storage on multiple LXC/LXD
>> nodes
>>
>> On 2016-11-03 00:53, Benoit GEORGELIN - Association Web4all wrote:
>>> Hi,
>>>
>>> I'm wondering what kind of storage are you using in your
>>> infrastructure ?
>>> In a multiple LXC/LXD nodes how would you design the storage part to
>>> be redundant and give you the flexibility to start a container from
>>> any host available ?
>>>
>>> Let's say I have two (or more) LXC/LXD nodes and I want to be able to
>>> start the containers on one or the other node.
>>> LXD allow to move containers across nodes by transferring the

Re: [lxc-users] Question about your storage on multiple LXC/LXD nodes

2016-11-03 Thread Ron Kelley
We do it slightly differently.  We run LXD containers on Ubuntu 16.04 Virtual 
Machines (inside a virtualized infrastructure).  Each physical server has 
redundant network links to highly-available storage.  Thus, we don't have to 
migrate containers between LXD servers; instead we migrate the Ubuntu VM to 
another server/storage pool.  Additionally, we use BTRFS snapshots inside the 
Ubuntu server to quickly restore backups for the LXD containers themselves.
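As a sketch of that snapshot step (paths assumed; this only works if 
/var/lib/lxd is itself a btrfs subvolume and the destination lives on the same 
filesystem):

  # nightly read-only snapshot of the LXD tree
  btrfs subvolume snapshot -r /var/lib/lxd /var/lib/lxd-snapshots/$(date +%F)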

So far, everything has been rock solid.  The LXD containers work great inside 
Ubuntu VMs (performance, scale, etc).  In the unlikely event we have to migrate 
an LXD container from one server to another, we will simply do an LXD copy 
(with a small maintenance window).

As an aside: I have tried gluster, ceph, and even DRBD in the past w/out much 
success.  Eventually, we went back to NFSv3 servers for performance/stability.  
I am looking into setting up an HA NFSv4 config to address the single point of 
failure with NFS v3 setups.

-Ron




On 11/3/2016 9:42 AM, Benoit GEORGELIN - Association Web4all wrote:
> Thanks, looks like nobody use LXD in a cluster 
> 
> Cordialement,
> 
> Benoît 
> 
> 
> *De: *"Tomasz Chmielewski" 
> *À: *"lxc-users" 
> *Cc: *"Benoit GEORGELIN - Association Web4all" 
> *Envoyé: *Mercredi 2 Novembre 2016 12:01:50
> *Objet: *Re: [lxc-users] Question about your storage on multiple LXC/LXD
> nodes
> 
> On 2016-11-03 00:53, Benoit GEORGELIN - Association Web4all wrote:
>> Hi,
>>
>> I'm wondering what kind of storage are you using in your
>> infrastructure ?
>> In a multiple LXC/LXD nodes how would you design the storage part to
>> be redundant and give you the flexibility to start a container from
>> any host available ?
>>
>> Let's say I have two (or more) LXC/LXD nodes and I want to be able to
>> start the containers on one or the other node.
>> LXD allow to move containers across nodes by transferring the data
>> from node A to node B but I'm looking to be able to run the containers
>> on node B if node A is in maintenance or crashed.
>>
>> There is a lot of distributed file system (gluster, ceph, beegfs,
>> swift etc..)  but I my case, I like using ZFS with LXD and I would
>> like to try to keep that possibility .
> 
> If you want to stick with ZFS, then your only option is setting up DRBD.
> 
> 
> Tomasz Chmielewski
> https://lxadm.com
> 
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
> 
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC containers w/ static IPs work on some hosts, not on others

2016-10-20 Thread Ron Kelley
hmmm, seems you are running the “original” version of lxc and not the new 
lxc/lxd software.  Please ignore my comments then…


On Oct 20, 2016, at 1:47 PM, Michael Peek <p...@nimbios.org> wrote:

# lxc profile show
The program 'lxc' is currently not installed. You can install it by typing:
apt install lxd-client

Maybe that's part of the problem?  Am I missing a package?  Here are the 
packages I have installed for lxc:

# dpkg -l | grep lxc | cut -c1-20
ii  liblxc1 
ii  lxc 
ii  lxc-common  
ii  lxc-templates   
ii  lxc1
ii  lxcfs   
ii  python3-lxc 

Michael

On 10/20/2016 01:43 PM, Ron Kelley wrote:
> "lxc profile show”.  Usually, you have a default profile that gets applied to 
> your container unless you have created a new/custom profile.
> 
> 
> 
> 
> 
> On Oct 20, 2016, at 1:41 PM, Michael Peek <p...@nimbios.org 
> <mailto:p...@nimbios.org>> wrote:
> 
> How do I tell?
> 
> Michael
> 
> 
> 
> On 10/20/2016 01:35 PM, Ron Kelley wrote:
>> What profile(s) are you using for your LXC containers?
>> 
>> 
>> 
>> On Oct 20, 2016, at 1:33 PM, Michael Peek <p...@nimbios.org 
>> <mailto:p...@nimbios.org>> wrote:
>> 
>> Hi gurus,
>> 
>> I'm scratching my head again.  I'm using the following commands to create an 
>> LXC container with a static IP address:
>> 
>> # lxc-create -n my-container-1 -t download -- -d ubuntu -r xenial -a amd64
>> 
>> # vi /var/lib/lxc/my-container-1/config
>> 
>> Change:
>> # Network configuration
>> # lxc.network.type = veth
>> # lxc.network.link = lxcbr0
>> # lxc.network.flags = up
>> # lxc.network.hwaddr = 00:16:3e:0d:ec:13
>> lxc.network.type = macvlan
>> lxc.network.link = eno1
>> 
>> # vi /var/lib/lxc/my-container-1/rootfs/etc/network/interfaces
>> 
>> Change:
>> #iface eth0 inet dhcp
>> iface eth0 inet static
>>   address xxx.xxx.xxx.4
>>   netmask 255.255.255.0
>>   network xxx.xxx.xxx.0
>>   broadcast xxx.xxx.xxx.255
>>   gateway xxx.xxx.xxx.1
>>   dns-nameservers xxx.xxx.0.66 xxx.xxx.128.66 8.8.8.8
>>   dns-search my.domain
>> 
>> # lxc-start -n my-container-1 -d
>> 
>> It failed to work.  I reviewed my notes from past posts to the list but 
>> found no discrepancies.  So I deleted the container and tried it on another 
>> host -- and it worked.  Next I deleted that container and went back to the 
>> first host, and it failed.  Lastly, I tried the above steps on multiple 
>> hosts and found that it works fine on some hosts, but not on others, and I 
>> have no idea why.  On hosts where this fails there are no error messages, 
>> but the container can't access the network, and nothing on the network can 
>> access the container.
>> 
>> Is there some step that I'm missing?
>> 
>> Thanks for any help,
>> 
>> Michael Peek
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org 
>> <mailto:lxc-users@lists.linuxcontainers.org>
>> http://lists.linuxcontainers.org/listinfo/lxc-users 
>> <http://lists.linuxcontainers.org/listinfo/lxc-users>
>> 
>> 
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org 
>> <mailto:lxc-users@lists.linuxcontainers.org>
>> http://lists.linuxcontainers.org/listinfo/lxc-users 
>> <http://lists.linuxcontainers.org/listinfo/lxc-users>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org 
> <mailto:lxc-users@lists.linuxcontainers.org>
> http://lists.linuxcontainers.org/listinfo/lxc-users 
> <http://lists.linuxcontainers.org/listinfo/lxc-users>
> 
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org 
> <mailto:lxc-users@lists.linuxcontainers.org>
> http://lists.linuxcontainers.org/listinfo/lxc-users 
> <http://lists.linuxcontainers.org/listinfo/lxc-users>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC containers w/ static IPs work on some hosts, not on others

2016-10-20 Thread Ron Kelley
What profile(s) are you using for your LXC containers?



On Oct 20, 2016, at 1:33 PM, Michael Peek  wrote:

Hi gurus,

I'm scratching my head again.  I'm using the following commands to create an 
LXC container with a static IP address:

# lxc-create -n my-container-1 -t download -- -d ubuntu -r xenial -a amd64

# vi /var/lib/lxc/my-container-1/config

Change:
# Network configuration
# lxc.network.type = veth
# lxc.network.link = lxcbr0
# lxc.network.flags = up
# lxc.network.hwaddr = 00:16:3e:0d:ec:13
lxc.network.type = macvlan
lxc.network.link = eno1

# vi /var/lib/lxc/my-container-1/rootfs/etc/network/interfaces

Change:
#iface eth0 inet dhcp
iface eth0 inet static
  address xxx.xxx.xxx.4
  netmask 255.255.255.0
  network xxx.xxx.xxx.0
  broadcast xxx.xxx.xxx.255
  gateway xxx.xxx.xxx.1
  dns-nameservers xxx.xxx.0.66 xxx.xxx.128.66 8.8.8.8
  dns-search my.domain

# lxc-start -n my-container-1 -d

It failed to work.  I reviewed my notes from past posts to the list but found 
no discrepancies.  So I deleted the container and tried it on another host -- 
and it worked.  Next I deleted that container and went back to the first host, 
and it failed.  Lastly, I tried the above steps on multiple hosts and found 
that it works fine on some hosts, but not on others, and I have no idea why.  
On hosts where this fails there are no error messages, but the container can't 
access the network, and nothing on the network can access the container.

Is there some step that I'm missing?

Thanks for any help,

Michael Peek
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXD - auto mount home directories

2016-10-18 Thread Ron Kelley
Greetings all,

Looking over the email archives, I could not find a 100% answer.  I would like 
to set up LDAP with auto-mounted home directories on our Ubuntu containers 
(servers running LXD 2.0.2 and beyond).

I found this thread:  https://github.com/lxc/lxd/issues/1826 but did not see a 
final answer.
I also found this: https://github.com/lxc/lxd/issues/2005
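For what it's worth, the direction those issues point to is mounting on the host 
and passing the directory through as a disk device. A minimal sketch (container 
name and paths are assumptions, and this sidesteps rather than answers the 
LDAP/automount part):

  # on the LXD host, assuming /home is already NFS/automounted there
  lxc config device add my-container homes disk source=/home path=/home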

Just need final clarification.

Thanks.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LAN for LXD containers (with multiple LXD servers)?

2016-09-18 Thread Ron Kelley
So, just for clarity, you are saying each LXD server will have no separate 
network connection for the containers.  Thus, all containers are private to the 
LXD server, and any outbound traffic must traverse the container server 
interface.  Is this correct?  If so, sorry, I must have missed this requirement 
in your initial email.



On Sep 18, 2016, at 9:41 AM, Tomasz Chmielewski <man...@wpkg.org> wrote:

On 2016-09-18 22:14, Ron Kelley wrote:
> (Long reply follows…)
> Personally, I think you need to look at the big picture for such
> deployments.  From what I read below, you are asking, “how do I extend
> my layer-2 subnets between data centers such that container1 in Europe
> can talk with container6 in Asia, etc”.  If this is true, I think you
> need to look at deploying data center hardware (servers with multiple
> NICs, IPMI/DRAC/iLO interfaces) with proper L2/L3 routing (L2TP/IPSEC,
> etc).  And, you must look at how your failover services will work in
> this design.  It’s easy to get a couple of servers working with a
> simple design, but those simple designs tend to go to production very
> fast without proper testing and design.

Well, it's not only about deploying on "different continents".

It can be also in the same datacentre, where the hosting doesn't give you a LAN 
option.

For example - Amazon AWS, same region, same availability zone.

The servers will have "private" addresses like 10.x.x.x, traffic there will be 
private to your servers, but there will be no LAN. You can't assign your own 
LAN addresses (10.x.x.x).

This means, while you can launch several LXD containers on every of these 
servers - but their "LAN" will be limited per each LXD server (unless we do 
some special tricks).

Some other hostings offer a public IP, or several public IPs per servers, in 
the same datacentre, but again, no LAN.


Tomasz Chmielewski
https://lxadm.com

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LAN for LXD containers (with multiple LXD servers)?

2016-09-18 Thread Ron Kelley
(Long reply follows…)

Personally, I think you need to look at the big picture for such deployments.  
From what I read below, you are asking, “how do I extend my layer-2 subnets 
between data centers such that container1 in Europe can talk with container6 in 
Asia, etc”.  If this is true, I think you need to look at deploying data center 
hardware (servers with multiple NICs, IPMI/DRAC/iLO interfaces) with proper 
L2/L3 routing (L2TP/IPSEC, etc).  And, you must look at how your failover 
services will work in this design.  It’s easy to get a couple of servers 
working with a simple design, but those simple designs tend to go to production 
very fast without proper testing and design.


All that said, here is one way I would tackle this type of request:

* Get servers with at least 3 NICs (preferably 5)
  * One iLO/DRAC/IPMI interface for out-of-band management
  * One for Container server management (ie: LXD1 IP 1.2.3.4) - use a second 
NIC for redundancy in a bonded configuration
  * One for Container hosting network (ie container1, container2, etc) - use a 
second NIC for redundancy and VLANs to separate traffic

* Get firewalls in each location with L2TP/IPSEC support (pfSense works great)
  * Extend your L2 networks between your sites with L2TP
  * Secure the connection with IPsec

* On your LXD servers, create 2 bonded NICs
  * One for container management (eth0, eth1)
  * One for hosting network (eth2, eth3)
  * Use VLANs on hosting network to separate traffic
  * Configure your containers with the appropriate VLAN tag (ie: 501)

Once the above is done, your containers can talk w/each other in different 
locations.  You can use firewall rules to allow/deny IP connections from your 
container VMs.  You can extend both your container management and hosting 
networks across the L2 tunnel allowing you to move VMs at will.  
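As a sketch of the per-VLAN profile idea mentioned in the notes below (profile 
and bridge names are assumptions; the host is assumed to already have a bridge 
such as br-vlan501 on the tagged bond interface):

  lxc profile create vlan501
  lxc profile device add vlan501 eth0 nic nictype=bridged parent=br-vlan501
  # later profiles override earlier ones for the same device name
  lxc launch ubuntu:16.04 web01 -p default -p vlan501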


General Notes:
---
* For server bonded connections, I use Linux bonding mode 6 (balance-alb); it 
works well, provides great throughput, and requires no special configuration on 
directly-connected switches.
* On the LXD side, create multiple profiles with VLAN configurations.  
Personally, I have 2 profiles: one for VLAN 501 and one for VLAN 502.  Local 
firewall provides security between container networks.
* Be mindful of the services you share across the tunnels.  Things like iSCSI, 
NFS, etc will kill your network performance because of the chatty type of 
traffic.


Some good references:
---
https://doc.pfsense.org/index.php/L2TP/IPsec
http://archive.openflow.org/wk/index.php/Tunneling_-_GRE/L2TP
http://www.networkworld.com/article/2163334/tech-primers/what-can-l2tp-do-for-your-network-.html

Caution: L2 networks have a lot of broadcast traffic.  If your site-to-site 
connections are slow, your entire extended L2 network will suffer.  Must find a 
way to suppress L2 broadcast/multicast between sites.


Hope this helps.  Happy to share my LXD configurations with anyone...

-Ron






On Sep 18, 2016, at 5:16 AM, Tomasz Chmielewski  wrote:

It's easy to create a "LAN" for LXD containers on a single LXD server - just 
attach them to the same bridge, use the same subnet (i.e. 10.10.10.0/24) - 
done. Containers can communicate with each other using their private IP address.

However, with more then one LXD server *not* in the same LAN (i.e. two LXD 
servers in different datacentres), the things get tricky.


Is anyone using such setups, with multiple LXD servers and containers being 
able to communicate with each other?


LXD1: IP 1.2.3.4, EuropeLXD2: IP 2.3.4.5, Asia
container1, 10.10.10.10 container4, 10.10.10.20
container2, 10.10.10.11 container5, 10.10.10.21
container3, 10.10.10.12 container6, 10.10.10.22


LXD3: IP 3.4.5.6, US
container7, 10.10.10.30
container8, 10.10.10.31
container8, 10.10.10.32


While I can imagine setting up many OpenVPN tunnels between all LXD servers 
(LXD1-LXD2, LXD1-LXD3, LXD2-LXD3) and constantly adjusting the routes as 
containers are stopped/started/migrated, it's a bit of a management nightmare. 
And even more so if the number of LXD servers grows.

Hints, discussion?


Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Desktop Environment in LXD

2016-06-18 Thread Ron Kelley
Perhaps your best option is to open a support ticket with Canonical?  I am sure 
someone (Stephen, etc) would be happy to help you over the phone.



On Jun 18, 2016, at 12:05 AM, Rahul Rawail  wrote:

Thanks for your answer, but again, can someone who knows this inside and out 
help us understand the options and concepts over the phone, please?

On Sat, Jun 18, 2016 at 1:04 PM, Saint Michael wrote:
I did this long ago but only using XVNC on the containers. It works, but 
performance is bad, since you have many Xservers and many Xvnc servers.
I don't think you can share the same graphics hardware from multiple 
containers. That would be possible only with a very powerful card made by 
Nvidia, designed specifically for 3D computing on virtual machines.


On Fri, Jun 17, 2016 at 9:32 PM, Rahul Rawail wrote:
Thanks Fajar for your answers, I still have some questions, please help:

> Thanks Simos for your answer, Just few questions and they may be dumb
> questions, if LXD is running on top of a host OS and host machine has
> graphic card I thought that it will be able to give it a call  and I
> understand that since LXD still uses core functions of host OS hence if I
> will create 100 containers then all of them will have access to all the host
> hardware including video and audio.


Yes, and no. Depending on what you want, and how you setup the containers.

One way to do it is to give containers access to the hardware
directly, often meaning that only one of them can use the hardware at
the same time.


> The reason I asked this question was to understand this: in the LXD 
presentation they said that LXD has the capability to replace all existing VMs, 
since they run a complete OS, but if you can't put a DE on one then it's not of 
much use.
Sorry for this, but when you said "yes and no", what did you mean? I guess "yes" 
means, as you explained, "to give containers access to the hardware directly, 
often meaning that only one of them can use the hardware at the same time." I 
understand that, but as with a VM, do we have the capability to put the drivers 
in the container again, or to have virtual drivers, so that all containers can 
use the hardware in parallel rather than only one using it at any one point in 
time? If they are a replacement for VMs then they should work like VMs; am I 
wrong with my expectation?


>
> I have tried  https://www.stgraber.org/2014/02/09/lxc-1-0-gui-in-containers/ 
> 


That's another way to do it: give containers access to host resources
(e.g. X, audio) via unix sockets by setting bind mounts manually

-> what will happen in this case, will they all work in parallel and 
have access to all the hardware of host machine at the same time??? 


> Also, I thought that container should be able to use host os X server and
> there should not be any need of another X server.

correct for the second way.

---> I am assuming all containers have access to same X server in parallel 
and the hardware in parallel.



> Can the container desktop environment be called from another remote machine
> for remote access.


... and that's the third way. Treat containers like another headless
server, and setup remote GUI access to it appropriately. My favorite
is xrdp, but vnc or x2go should work as well (I haven't tested sound
though, didn't need it)

Note that if you need copy/paste and file transfer support in xrdp,
your containers need to be privileged with access to /dev/fuse
enabled. If you don't need that feature, the default unpriv container
is fine.

-> I have not tried this but I am assuming if I am not able to get 
the DE up in container due to "xf86OpenConsole: Cannot open /dev/tty0 (no such 
file found) " error then xrdp or x2go are just going to show me the terminal on 
the client side and not desktops, am I right with my assumption?

All I want to do in the first stage is bring up an LXD container and then bring 
up a new window on the current desktop, like any other VM, with another desktop 
environment in it for the container. I should be able to do this for every 
container and hence have multiple desktops on my current desktop, all having 
access to the host hardware in parallel, while maintaining the same bare-metal 
performance without adding any overhead. Will xrdp or x2go still reap the same 
benefit as LXD, or add performance overhead? The next stage is then to take it 
to a remote client, which you already explained before. Is there any other 
option? I ask because I read somewhere that LXD by default has the ability to 
connect to another LXD server.

One last request to you or to anyone: if possible, can someone please give half 
an hour to an hour over the phone (we will call) just to help us out? We have 
been struggling for weeks and asking for help 

[lxc-users] LXD - bind mount inside container

2016-06-14 Thread Ron Kelley
Greetings,

Looking to set up a bind mount inside a CentOS-6 container for ~user-a/WWW 
pointing to /var/www/html.  However, each time I run "mount --bind 
/home/user-a/www /var/www/html" I get a read-only error message and the bind 
mount is not created.  This works just fine inside a "normal" VM.

Any pointers?

Thanks.

-Ron
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Proper usage of fuidshift

2016-05-16 Thread Ron Kelley
Trying to understand the right way to use fuidshift.  I have rsync'd a 
container from one server to another and the root/group IDs are off.  Each time 
I start the container, I get permission denied errors (like root's .bashrc).  I 
read the manpage for fuidshift but am still confused.  Various incantations 
don't appear to put the right permissions on the container's files/directories. 
 Can someone please give some guidance?  

/etc/subgid output:

rkelley:10:65536
lxd:165536:65536
root:165536:65536
wpadmin:231072:65536


Container rsync'd from another server:
-
root@hj-wp-container-mgmt-01:/var/lib/lxd/containers/CentOS7-PHP56-Baseline-Current#
 ls -la
total 4
drwxr-xr-x+  3 root   root 19 May 16 10:29 .
drwx--x--x   4 root   root131 May 16 10:33 ..
dr-xr-xr-x  19 10 10 4096 May  5 17:48 rootfs


New container on server:
---
root@hj-wp-container-mgmt-01:/var/lib/lxd/containers/test-container# ls -al
total 8
drwxr-xr-x+  4 165536 165536   55 May 16 11:52 .
drwx--x--x   5 root   root152 May 16 11:52 ..
dr-xr-xr-x  18 165536 165536 4096 May 16 11:52 rootfs


From what I can see, the CentOS7-PHP56-Baseline-Current container should have 
root/group IDs of 165536/165536 but it has 10/10 instead.

My question is: how can I get the CentOS7-PHP56-Baseline-Current container to 
get the correct permissions using fuidshift?
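For reference, fuidshift takes the directory to shift followed by one or more 
ranges of the form <u|b|g>:<first container id>:<first host id>:<size>. A hedged 
sketch for this case, assuming the rsync'd tree currently uses a 100000 base and 
the target base is 165536 per /etc/subgid above:

  fuidshift /var/lib/lxd/containers/CentOS7-PHP56-Baseline-Current/rootfs \
      u:100000:165536:65536 g:100000:165536:65536
  # the container directory itself may still need a chown to the new base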

Thanks.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] zfs disk usage for published lxd images

2016-05-16 Thread Ron Kelley
Thanks for that.  Honestly, the only issue I have seen thus far is the 
out-of-space issue due to metadata.  This seems to happen frequently on our 
backup servers (55TB with thousands of snapshots).  We have plenty of disk space 
available, but the metadata space is always >80% full no matter how much I try 
to remove/clean.  Looks like I need to upgrade to the latest 4.6 mainline kernel 
and see what happens.  

From my experience, btrfs is much better than zfs for the features I need 
(snapshots, compression, dedup).  My systems don't slow down and don't require 
nearly as much RAM.


On 5/16/2016 7:46 AM, Tomasz Chmielewski wrote:
> I've been using btrfs quite a lot and it's great technology. There are
> some shortcomings though:
> 
> 1) compression only really works with compress-force mount argument
> 
> On a system which only stores text logs (receiving remote rsyslog logs),
> I was gaining around 10% with compress=zlib mount argument - not that
> great for text files/logs. With compress-force=zlib, I'm getting over
> 85% compress ratio (i.e. using just 165 GB of disk space to store 1.2 TB
> data). Maybe that's the consequence of receiving log streams, not sure
> (but, compress-force fixed bad compression ratio).
> 
> 
> 2) the first kernel where I'm not getting out-of-space issues is 4.6
> (which was released yesterday). If you're using a distribution kernel,
> you will probably be seeing out-of-space issues. Quite likely scenario
> to hit out-of-space with a kernel lower than 4.6 is to use a database
> (postgresql, mongo etc.) and to snapshot the volume. Ubuntu users can
> download kernel packages from
> http://kernel.ubuntu.com/~kernel-ppa/mainline/
> 
> 
> 3) had some really bad experiences with btrfs quotas stability in older
> kernels, and judging by the amount of changes in this area on
> linux-btrfs mailing list, I'd rather wait a few stable kernels than use
> it again
> 
> 
> 4) if you use databases - you should chattr +C database dir, otherwise,
> performance will suffer. Please remember that chattr +C does not have
> any effect on existing files, so you might need to stop your database,
> copy the files out, chattr +C the database dir, copy the files in
> 
> 
> Other than that - works fine, snapshots are very useful.
> 
> It's hard to me to say what's "more stable" on Linux (btrfs or zfs); my
> bets would be btrfs getting more attention in the coming year, as it's
> getting its remaining bugs fixed.
> 
> 
> Tomasz Chmielewski
> http://wpkg.org
> 
> 
> 
> 
> On 2016-05-16 20:20, Ron Kelley wrote:
>> I tried ZFS on various linux/FreeBSD builds in the past and the
>> performance was aweful.  It simply required too much RAM to perform
>> properly.  This is why I went the BTRFS route.
>>
>> Maybe I should look at ZFS again on Ubuntu 16.04...
>>
>>
>>
>> On 5/16/2016 6:59 AM, Fajar A. Nugraha wrote:
>>> On Mon, May 16, 2016 at 5:38 PM, Ron Kelley <rkelley...@gmail.com>
>>> wrote:
>>>> For what's worth, I use BTRFS, and it works great.
>>>
>>> Btrfs also works in nested lxd, so if that's your primary use I highly
>>> recommend btrfs.
>>>
>>> Of course, you could also keep using zfs-backed containers, but
>>> manually assign a zvol-formatted-as-btrfs for first-level-container's
>>> /var/lib/lxd.
>>>
>>>>  Container copies are almost instant.  I can use compression with
>>>> minimal overhead,
>>>
>>> zfs and btrfs are almost identical in that aspect (snapshot/clone, and
>>> lz4 vs lzop in compression time and ratio). However, lz4 (used in zfs)
>>> is MUCH faster at decompression compared to lzop (used in btrfs),
>>> while lzop uses less memory.
>>>
>>>> use quotas to limit container disk space,
>>>
>>> zfs does that too
>>>
>>>> and can schedule a deduplication task via cron to save even more space.
>>>
>>> That is, indeed, only available in btrfs
>>>
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
> 
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] zfs disk usage for published lxd images

2016-05-16 Thread Ron Kelley
I tried ZFS on various linux/FreeBSD builds in the past and the 
performance was awful.  It simply required too much RAM to perform 
properly.  This is why I went the BTRFS route.


Maybe I should look at ZFS again on Ubuntu 16.04...



On 5/16/2016 6:59 AM, Fajar A. Nugraha wrote:

On Mon, May 16, 2016 at 5:38 PM, Ron Kelley <rkelley...@gmail.com> wrote:

For what's worth, I use BTRFS, and it works great.


Btrfs also works in nested lxd, so if that's your primary use I highly
recommend btrfs.

Of course, you could also keep using zfs-backed containers, but
manually assign a zvol-formatted-as-btrfs for first-level-container's
/var/lib/lxd.


 Container copies are almost instant.  I can use compression with minimal 
overhead,


zfs and btrfs are almost identical in that aspect (snapshot/clone, and
lz4 vs lzop in compression time and ratio). However, lz4 (used in zfs)
is MUCH faster at decompression compared to lzop (used in btrfs),
while lzop uses less memory.


use quotas to limit container disk space,


zfs does that too


and can schedule a deduplication task via cron to save even more space.


That is, indeed, only available in btrfs


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] zfs disk usage for published lxd images

2016-05-16 Thread Ron Kelley
For what it's worth, I use BTRFS, and it works great.  Container copies are almost 
instant.  I can use compression with minimal overhead, use quotas to limit 
container disk space, and can schedule a deduplication task via cron to save 
even more space.
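To make the quota and dedup pieces concrete, a rough sketch (paths, sizes, and 
the choice of duperemove as the dedup tool are all assumptions):

  # per-container quota on a btrfs-backed /var/lib/lxd
  btrfs quota enable /var/lib/lxd
  btrfs qgroup limit 20G /var/lib/lxd/containers/my-container
  # /etc/crontab-style entry for a nightly out-of-band dedup pass
  0 3 * * * root duperemove -dr /var/lib/lxd > /dev/null 2>&1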

Might be something for you to check out.

Thanks,

-Ron

On May 16, 2016, at 6:34 AM, Brian Candler  wrote:

> On 16/05/2016 10:55, Fajar A. Nugraha wrote:
>> Are you using the published images on the same lxd instance?
> Yes.
>> If so, you can use "lxc copy" on a powered-off container. It should 
>> correctly use zfs clone. You can also copy-a-copy.
> 
> That was the clue I was looking for. Thank you!
> 
> Regards,
> 
> Brian.
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Container scaling - LXD 2.0

2016-05-09 Thread Ron Kelley
Thanks Fajar,

Appreciate the pointers.  We have already set up MariaDB with the small-instance 
tuning and configured php-fpm to use the on-demand option.  The big issue now is 
RAM.
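For reference, the relevant php-fpm pool settings look roughly like this (the 
values are illustrative, not our production numbers):

  pm = ondemand
  pm.max_children = 2
  pm.process_idle_timeout = 10s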

A brief background:
-
A few years back, one of our customers asked us to host a small website for 
them.  As word got out, we started hosting a few more.  Fast forward a few 
years and we are now hosting > 1300 sites.  We are currently running monolithic 
VMs (2vCPUs 2G RAM) that host about 60-70 sites each, and we are looking to 
move away from these huge VMs to something more scalable and secure like LXC.  
The downside to this approach is the extra RAM overhead since each container 
will run its own copy of nginx/php-fpm/mariadb (for ease of portability).  

After doing some research, it seems KSM is compiled into the Ubuntu 16.04 kernel but 
is disabled by default.  I will be running some tests over the next few days to 
see if KSM can provide any benefit.  As for the 5G RAM question; our proposed 
model is to run a large VM instance (5-8G RAM, 4-6vCPUs) to host the same (or 
more) sites via LXC containers.  We are looking to protect each site from 
the others as well as provide more fine-tuned system resources per site (limit 
RAM/CPU per site).  This is our main driver behind LXC.
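For the KSM tests, the standard sysfs toggle (a plain kernel interface, nothing 
LXD-specific) is shown below; note that KSM only merges pages an application has 
marked with madvise(MADV_MERGEABLE), which is why plain container workloads may 
not benefit much on their own:

  echo 1 | sudo tee /sys/kernel/mm/ksm/run
  cat /sys/kernel/mm/ksm/pages_sharing   # rough gauge of how much is actually merged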


Thanks again for the info.

-Ron



On 5/9/2016 12:48 AM, Fajar A. Nugraha wrote:
> On Mon, May 9, 2016 at 7:18 AM, Ronald Kelley  wrote:
>> Greetings all,
>>
>> I am trying to get some data points on how many containers we can run on a 
>> single host if all the containers run the same applications (eg: Wordpress 
>> w/nginx, php-fpm, mysql).  We have a number of LXD 2.0 servers running on 
>> Ubuntu 16.04 - each server has 5G RAM, 20G Swap, and 4 CPUs.
> 
> When you use lxd you can already "overprovision" (as in, the sum of
> "limits.memory" on all running containers can be MUCH greater than
> total memory you have). See
> https://insights.ubuntu.com/2015/06/11/how-many-containers-can-you-run-on-your-machine/
> for example.
> 
> I can say that swapping will -- most of the time -- kill performance.
> Big time. Often to the point that it'd be hard to even ssh into the
> server to "fix" things. Which is why most of my servers are now
> swapless. YMMV though.
> 
> Do some experiments, monitor your swap activity (e.g. use "vmstat" to
> monitor swap in and swap out), and determine whether swap actually
> helps you, or cause more harm than good.
> 
> Also, what's the story with the 5G RAM? Even my NUCs has 32GB RAM nowadays.
> 
>> I have read about Kernel Samepage Merging (KSM), and it seems to be included 
>> in the Ubuntu 16.04 kernel.  So, in theory, we can overprovision our 
>> containers by using KSM.
>>
>>
>> Any pointers?
> 
> I'd actually suggest "try other methods first". For example:
> - you can easily save some memory from php-fpm by using "pm =
> ondemand" and a small number in "pm.max_children" (e.g. 2).
> - use shared mysql instance when possible. If not, use smaller memory
> settings, e.g. 
> http://www.tocker.ca/2014/03/10/configuring-mysql-to-use-minimal-memory.html
> 
> This entry from openvz should be relevant if you still want to use KSM
> for generic applications running inside a container:
> https://openvz.org/KSM_(kernel_same-page_merging)#Enabling_memory_deduplication_in_applications
> 
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] How to limit rootfs in a container

2016-05-05 Thread Ron Kelley
Greetings all,

From the LXD documentation URL 
(https://github.com/lxc/lxd/blob/master/doc/configuration.md) it appears we can 
limit the size of the rootfs inside the container via the “size” parameter.  
However, I can’t seem to find the proper syntax to make this happen.  I need 
the syntax for the profile as well as the “lxc config set” command.
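For what it's worth, a hedged sketch of the syntax as I understand it from that 
doc (container and profile names assumed; whether the quota is actually enforced 
depends on the storage backend, e.g. ZFS or btrfs):

  # per container
  lxc config device set my-container root size 10GB

  # in a profile, via "lxc profile edit <name>":
  devices:
    root:
      path: /
      size: 10GB
      type: disk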

Thanks for any pointers.

-Ron
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Copying/cloning a container between nodes if LXC/LXD is not running

2016-05-03 Thread Ron Kelley
Greetings all,

I updated some packages on my Ubuntu 15.10 server today which (when rebooted) 
caused the bridged networking to no longer work.  As a result, the LXD daemon 
would not start, in turn, preventing me from spinning up some containers.  
After fixing the networking issue, I realized I don’t have an emergency 
plan/process to recover a container on Node-B if Node-A fails.

I am looking for a way to rsync container-1 on Node-A to Node-B if/when LXC/LXD 
is not running on Node-A.  I realized the UID/GID values are different between 
the nodes, thus I need a way to sync the rootfs between nodes and remap the 
UID/GID properly.
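For the sync half, a sketch of the offline copy (paths and hostname are 
assumptions; the remap itself would still be a fuidshift/chown pass on Node-B):

  # copy the stopped container's tree, preserving numeric ownership, ACLs and xattrs
  rsync -aAXH --numeric-ids /var/lib/lxd/containers/container-1/ \
      root@node-b:/var/lib/lxd/containers/container-1/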

Any pointers?

Thanks.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users