Re: [lxc-users] Snapshot left behind after deleting container?

2019-09-23 Thread Lai Wei-Hwa
No. I'm the sysadmin who put LXD in place. I'm asking: 


1. why is this snapshot left behind when I deleted the container? 
2. how should I properly remove this snapshot? 
3. why are there what appear to be snapshots that I did not create? 

Thanks! 
Lai 


From: "Fajar A. Nugraha"  
To: "lxc-users"  
Sent: Thursday, September 19, 2019 2:31:37 AM 
Subject: Re: [lxc-users] Snapshot left behind after deleting container? 

On Wed, Sep 18, 2019 at 10:15 PM Lai Wei-Hwa <wh...@robco.com> wrote: 



I don't see it listed when using: lxc storage volume list 

But it does appear to be a snapshot. How was this generated? Why is it there? I 
see others that have similar naming conventions (ending in a number string that 
I didn't create). 




So you're asking "what is on my system" to strangers on the internet, rather 
than asking your (previous) sysadmin? Right ... 


> How do I properly remove this without causing any unintended consequences? 

Quick google search points to 
https://ubuntu.com/blog/lxd-2-0-your-first-lxd-container (look for "Snapshot 
management"), 
https://discuss.linuxcontainers.org/t/lxd-3-8-has-been-released/3450 and 
https://discuss.linuxcontainers.org/t/lxd-3-11-has-been-released/4245 
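
For reference, the basic snapshot commands from those docs look like this (a 
minimal sketch; "c1" and "snap0" are placeholder names): 

lxc snapshot c1 snap0   # create a snapshot of container c1 
lxc info c1             # lists the container's snapshots, among other things 
lxc delete c1/snap0     # deletes only the snapshot, not the container 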

Newer lxd versions put snapshots under a different directory (e.g. 
/var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/), so my 
guess is you're probably running an old version (2.x?). 
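
You can confirm the version with something like: 

lxc --version 
snap list lxd   # if installed via snap 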

Also, deleting a container should also delete its snapshots (tested on lxd 3.17). 
There is a "snapshots.pattern" container configuration key, but it won't help 
much since in your case you already deleted the container. So (again) my guess 
is the snapshot is something created by an additional tool external to lxd (a 
manual or scheduled btrfs snapshot, perhaps?). In which case your (former) 
sysadmin should know more. 
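
If it does turn out to be a stray btrfs snapshot, something along these lines 
(paths taken from your original mail; first make sure nothing else references 
the subvolume) would show and remove it: 

btrfs subvolume list /storage 
btrfs subvolume delete /storage/lxd/common/lxd/storage-pools/Pool-1/containers/zimbra-backup575925076 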

-- 
Fajar 

___ 
lxc-users mailing list 
lxc-users@lists.linuxcontainers.org 
http://lists.linuxcontainers.org/listinfo/lxc-users 


Re: [lxc-users] Snapshot left behind after deleting container?

2019-09-18 Thread Lai Wei-Hwa
I don't see it listed when using: lxc storage volume list 

But it does appear to be a snapshot. How was this generated? Why is it there? I 
see others that have similar naming conventions (ending in a number string that 
I didn't create). How do I properly remove this without causing any unintended 
consequences? 

Thanks! 
Lai 


From: "Lai Wei-Hwa"  
To: "lxc-users"  
Sent: Tuesday, September 17, 2019 1:41:31 PM 
Subject: [lxc-users] Snapshot left behind after deleting container? 

I had a container: zimbra-backup 
I deleted this container. 
But I see this: 

root@R510-LXD6-Backup:/storage/lxd/common/lxd/storage-pools/Pool-1/containers# 
ll 
total 0 
drwx--x--x 1 root root 750 Sep 17 11:39 ./ 
drwx--x--x 1 root root 84 Oct 14 2018 ../ 
drwx-- 1 root root 30 Nov 26 2018 zimbra-backup575925076/ 

I'm not sure if I created this snapshot or not. The naming convention makes me 
think I didn't do this. Regardless, why is it still there and what is the right 
way to remove it? 

Best Regards, 

Lai Wei-Hwa 
IT Administrator 
T. (514) 367-2252 ext 6308 C. (514) 218-7400 wh...@robco.com 
Montreal - Toronto - Edmonton www.robco.com 

ISO 9001 / 14001 


___ 
lxc-users mailing list 
lxc-users@lists.linuxcontainers.org 
http://lists.linuxcontainers.org/listinfo/lxc-users 


[lxc-users] Snapshot left behind after deleting container?

2019-09-17 Thread Lai Wei-Hwa
I had a container: zimbra-backup 
I deleted this container. 
But I see this: 

root@R510-LXD6-Backup:/storage/lxd/common/lxd/storage-pools/Pool-1/containers# 
ll 
total 0 
drwx--x--x 1 root root 750 Sep 17 11:39 ./ 
drwx--x--x 1 root root 84 Oct 14 2018 ../ 
drwx-- 1 root root 30 Nov 26 2018 zimbra-backup575925076/ 

I'm not sure if I created this snapshot or not. The naming convention makes me 
think I didn't do this. Regardless, why is it still there and what is the right 
way to remove it? 

Best Regards, 

Lai Wei-Hwa 
IT Administrator 
T. (514) 367-2252 ext 6308 C. (514) 218-7400 wh...@robco.com 
Montreal - Toronto - Edmonton www.robco.com 

ISO 9001 / 14001 

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


[lxc-users] Issue with networking when bridging to vlan interface

2019-03-19 Thread Lai Wei-Hwa
Hi Everyone, 

Here's my scenario. My host has 4 nics which are bonded (bond0). I want to have 
different containers using different vlans. Here is my host interfaces file: 


source /etc/network/interfaces.d/* 

######################
# Loopback interface #
######################

auto lo 
iface lo inet loopback 

#######################
# Physical Interfaces #
#######################

auto eno1 
iface eno1 inet manual 
bond-master bond0 
bond-primary eno1 

auto eno2 
iface eno2 inet manual 
bond-master bond0 

auto eno3 
iface eno3 inet manual 
bond-master bond0 

auto eno4 
iface eno4 inet manual 
bond-master bond0 


##########################
# BONDED NETWORK DEVICES #
##########################

auto bond0 
iface bond0 inet manual 
bond-mode 4 
bond-slaves none 
bond-miimon 100 
bond-lacp-rate 1 
bond-slaves eno1 eno2 eno3 eno4 
bond-downdelay 200 
bond-updelay 200 
bond-xmit-hash-policy layer2+3 


 
####################
# RAW VLAN DEVICES #
####################
 

iface bond0.100 inet static 
vlan-raw-device bond0 

iface bond0.101 inet static 
vlan-raw-device bond0 

iface bond0.102 inet static 
vlan-raw-device bond0 

iface bond0.103 inet static 
vlan-raw-device bond0 


##########################
# BRIDGE NETWORK DEVICES #
##########################

auto br0 
iface br0 inet static 
bridge_ports bond0 
bridge_maxwait 10 
address 10.9.0.188 
netmask 255.255.0.0 
broadcast 10.9.255.255 
network 10.9.0.0 
gateway 10.9.0.1 
dns-nameservers 10.1.1.84 8.8.8.8 

auto br0-100 
iface br0-100 inet manual 
bridge_ports bond0.100 
bridge_stp off 
bridge_fd 0 
bridge_maxwait 0 

auto br0-101 
iface br0-101 inet manual 
bridge_ports bond0.101 
bridge_stp off 
bridge_fd 0 
bridge_maxwait 0 

auto br0-102 
iface br0-102 inet manual 
bridge_ports bond0.102 
bridge_stp off 
bridge_fd 0 
bridge_maxwait 0 

auto br0-103 
iface br0-103 inet manual 
bridge_ports bond0.103 
bridge_stp off 
bridge_fd 0 
bridge_maxwait 0 

I have created a profile for a container to use vlan100 (br0-100): 

config: {} 
description: "" 
devices: 
  eth0: 
    nictype: bridged 
    parent: br0-100 
    type: nic 
  root: 
    path: / 
    pool: default 
    type: disk 
name: vlantest 
used_by: 
- /1.0/containers/v100 
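
For reference, a profile like this would typically be attached at launch with 
something along these lines (the image alias is just an example): 

lxc launch ubuntu:16.04 v100 -p vlantest 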

The interface file in my container: 

source /etc/network/interfaces.d/*.cfg 

# The loopback network interface 
auto lo 
iface lo inet loopback 

# The primary network interface 
auto eth0 
iface eth0 inet static 
address 10.9.100.101 
netmask 255.255.255.0 
network 10.9.100.0 
gateway 10.9.100.1 
broadcast 10.9.100.255 
dns-nameservers 10.1.1.84 

ifconfig in container: 

eth0 Link encap:Ethernet HWaddr 00:16:3e:40:16:9c 
inet6 addr: fe80::216:3eff:fe40:169c/64 Scope:Link 
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 
RX packets:12 errors:0 dropped:0 overruns:0 frame:0 
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0 
collisions:0 txqueuelen:1000 
RX bytes:1016 (1.0 KB) TX bytes:1192 (1.1 KB) 

eth1 Link encap:Ethernet HWaddr 00:16:3e:42:12:66 
inet6 addr: fe80::216:3eff:fe42:1266/64 Scope:Link 
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 
RX packets:14 errors:0 dropped:0 overruns:0 frame:0 
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 
collisions:0 txqueuelen:1000 
RX bytes:1700 (1.7 KB) TX bytes:508 (508.0 B) 

lo Link encap:Local Loopback 
inet addr:127.0.0.1 Mask:255.0.0.0 
inet6 addr: ::1/128 Scope:Host 
UP LOOPBACK RUNNING MTU:65536 Metric:1 
RX packets:0 errors:0 dropped:0 overruns:0 frame:0 
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 
collisions:0 txqueuelen:1 
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) 

What am I doing wrong here? The container creates an additional interface for 
some reason (eth1), but neither interface gets an address. 

Best Regards, 

Lai Wei-Hwa 
IT Administrator 
T. (514) 367-2252 ext 6308 C. (514) 218-7400 wh...@robco.com 
Montreal - Toronto - Edmonton www.robco.com 

ISO 9001 / 14001 

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


[lxc-users] Enabling fanotify in LXD container

2018-06-18 Thread Lai Wei-Hwa
I have fanotify enabled on my 16.04.4 host: 

root@R510-LXD5-SMB:~# cat /boot/config-4.4.0-128-generic |grep -i fanotify 
CONFIG_FANOTIFY=y 
CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y 

In my container, /boot contains only grub, so I'm not sure how to check. But 
I'm pretty sure it's not available (fanotify_init failed). Can someone explain 
(a possible check is sketched below): 


1. how to enable fanotify on container 
2. how to verify that fanotify is running 
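
One way to check from inside the container, I think, is to read the kernel 
config through /proc, since the container shares the host kernel (this assumes 
the kernel was built with CONFIG_IKCONFIG_PROC): 

zcat /proc/config.gz | grep -i fanotify 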

Best Regards, 

Lai Wei-Hwa 
IT Administrator 
T. (514) 367-2252 ext 6308 C. (514) 218-7400 wh...@robco.com 
Montreal - Toronto - Edmonton www.robco.com 

ISO 9001 / 14001 

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Hotplugging devices (USB Z-Wave controller) with path value

2018-04-20 Thread Lai Wei-Hwa
One last question:

On a host, the device would be owned by root:dialout
In a guest, the device is root:root

In the HASS use case, one would normally add the hass user to dialout. In the 
container, we wouldn't want to add hass to the root group. While chowning the 
device to hass:hass works, a reboot reverts the ownership to root. Is there an 
LXD/LXC way to control permissions on a passed-through device? I know I can 
mitigate it in some other ways, but just wondering if there's an LXD way to do 
this. 
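
I do see uid/gid keys on unix-char devices (they appear in the device examples 
quoted below), so maybe something like this would make it stick (20 is dialout 
on a stock Ubuntu guest; verify with "getent group dialout" in the container): 

lxc config device set hass Z-Wave gid 20 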

Thanks! 
Lai

- Original Message -
From: "Stéphane Graber" <stgra...@ubuntu.com>
To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
Sent: Friday, April 20, 2018 12:54:31 PM
Subject: Re: [lxc-users] Hotplugging devices (USB Z-Wave controller) with path  
value

On Wed, Apr 18, 2018 at 03:41:04PM -0400, Lai Wei-Hwa wrote:
> To add another thing, even though I've removed the device (lxc config device 
> remove hass Z-Wave), it is still seen in the container: 
> 
> lai@hass:~$ lsusb 
> Bus 002 Device 003: ID 0624:0249 Avocent Corp. Virtual Keyboard/Mouse 
> Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub 
> Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub 
> Bus 005 Device 002: ID 0624:0248 Avocent Corp. Virtual Hub 
> Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub 
> Bus 001 Device 002: ID 0424:2514 Standard Microsystems Corp. USB 2.0 Hub 
> Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub 
> Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub 
> Bus 003 Device 002: ID 0658:0200 Sigma Designs, Inc. 
> Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub 
> 
> Thanks! 
> Lai 
> 
> 
> 
> I was trying to setup hotplugging a USB into an LXD container. I need the 
> path, in the container, to be /dev/ttyACM0. How can I do this? 
> 
> lai@host:~$ lxc config device add hass Z-Wave unix-char vendorid=0658 
> productid=0200 path=/dev/ttyACM0 
> Error: Invalid device configuration key for unix-char: productid 

lxc config device add hass Z-Wave unix-char path=/dev/ttyACM0

If you have multiple devices that may end up at that path, you should
instead use something like:

lxc config device add hass Z-Wave unix-char 
source=/dev/serial/by-id/usb-FTDI_FT232R_USB_UART_A60337Y1-if00-port0 
path=/dev/ttyACM0

This will ensure that the device at /dev/ttyACM0 in the container is always the 
same one.


I do this in my openhab container here where I have the following devices:
  usb-alarm:
gid: "111"
path: /dev/ttyUSB1
source: /dev/serial/by-id/usb-FTDI_FT230X_Basic_UART_DQ00AXEP-if00-port0
type: unix-char
uid: "0"
  usb-insteon:
gid: "111"
path: /dev/ttyUSB0
source: /dev/serial/by-id/usb-FTDI_FT232R_USB_UART_A60337Y1-if00-port0
type: unix-char
uid: "0"
  usb-z-wave:
gid: "111"
path: /dev/ttyACM0
source: /dev/serial/by-id/usb-0658_0200-if00
type: unix-char
uid: "0"


> 
> lai@host:~$ lxc config device add hass Z-Wave usb vendorid=0658 
> productid=0200 path=/dev/ttyACM0 
> Error: Invalid device configuration key for usb: path 
> 
> 
> ___ 
> lxc-users mailing list 
> lxc-users@lists.linuxcontainers.org 
> http://lists.linuxcontainers.org/listinfo/lxc-users 



-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Hotplugging devices (USB Z-Wave controller) with path value

2018-04-18 Thread Lai Wei-Hwa
To add another thing, even though I've removed the device (lxc config device 
remove hass Z-Wave), it is still seen in the container: 

lai@hass:~$ lsusb 
Bus 002 Device 003: ID 0624:0249 Avocent Corp. Virtual Keyboard/Mouse 
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub 
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub 
Bus 005 Device 002: ID 0624:0248 Avocent Corp. Virtual Hub 
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub 
Bus 001 Device 002: ID 0424:2514 Standard Microsystems Corp. USB 2.0 Hub 
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub 
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub 
Bus 003 Device 002: ID 0658:0200 Sigma Designs, Inc. 
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub 

Thanks! 
Lai 



I was trying to setup hotplugging a USB into an LXD container. I need the path, 
in the container, to be /dev/ttyACM0. How can I do this? 

lai@host:~$ lxc config device add hass Z-Wave unix-char vendorid=0658 
productid=0200 path=/dev/ttyACM0 
Error: Invalid device configuration key for unix-char: productid 

lai@host:~$ lxc config device add hass Z-Wave usb vendorid=0658 productid=0200 
path=/dev/ttyACM0 
Error: Invalid device configuration key for usb: path 


___ 
lxc-users mailing list 
lxc-users@lists.linuxcontainers.org 
http://lists.linuxcontainers.org/listinfo/lxc-users 

[lxc-users] Hotplugging devices (USB Z-Wave controller) with path value

2018-04-18 Thread Lai Wei-Hwa
Hey Everyone, 

I was trying to setup hotplugging a USB into an LXD container. I need the path, 
in the container, to be /dev/ttyACM0. How can I do this? 

lai@host:~$ lxc config device add hass Z-Wave unix-char vendorid=0658 
productid=0200 path=/dev/ttyACM0 
Error: Invalid device configuration key for unix-char: productid 

lai@host:~$ lxc config device add hass Z-Wave usb vendorid=0658 productid=0200 
path=/dev/ttyACM0 
Error: Invalid device configuration key for usb: path 

I can pass it through without using the path, but then I don't know where it 
mounts (though, even if I did, I need it to mount where I've indicated). 


Best Regards, 

Lai Wei-Hwa 
IT Administrator 
T. (514) 367-2252 ext 6308 C. (514) 218-7400 wh...@robco.com 
Montreal - Toronto - Edmonton www.robco.com 

ISO 9001 / 14001 

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC copy snapshots only to remote?

2018-03-12 Thread Lai Wei-Hwa
Hi Fajar, 

Yes, I'm aware of other alternatives. My question is geared towards how LXD/LXC 
works with snapshots. 

Thanks! 
Lai 


From: "Fajar A. Nugraha" <l...@fajar.net> 
To: "lxc-users" <lxc-users@lists.linuxcontainers.org> 
Sent: Wednesday, March 7, 2018 2:40:59 PM 
Subject: Re: [lxc-users] LXC copy snapshots only to remote? 

On Thu, Mar 8, 2018 at 1:49 AM, Lai Wei-Hwa <wh...@robco.com> wrote: 



Thanks Fajar, 

I'm more interested in if I'm right or wrong and why that's the case. 

Incremental snapshot support is in LXD 3.0 but I'm asking in relation to LXC, 
not LXD. And I'm really looking to clear up my (mis)understanding. 





Ah, I must not have the most recent info then. 

Regardless of when (or if) the devs decide to implement it in lxc (even support 
in lxd seems not complete yet: https://github.com/lxc/lxd/issues/3326 ), you 
could always perform storage-level backup yourself. Particularly handy if you 
have lots of containers, use zfs, and use recursive incremental send. 
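
A minimal sketch of that with plain zfs, assuming the containers live under a 
dataset called tank/lxd and the second host h2 is reachable over ssh: 

zfs snapshot -r tank/lxd@day1 
zfs send -R tank/lxd@day1 | ssh h2 zfs receive -F tank/lxd 
# later, send only the delta: 
zfs snapshot -r tank/lxd@day2 
zfs send -R -i @day1 tank/lxd@day2 | ssh h2 zfs receive -F tank/lxd 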

-- 
Fajar 

___ 
lxc-users mailing list 
lxc-users@lists.linuxcontainers.org 
http://lists.linuxcontainers.org/listinfo/lxc-users 

Re: [lxc-users] LXC copy snapshots only to remote?

2018-03-07 Thread Lai Wei-Hwa
Thanks Fajar, 

I'm more interested in if I'm right or wrong and why that's the case. 

Incremental snapshot support is in LXD 3.0 but I'm asking in relation to LXC, 
not LXD. And I'm really looking to clear up my (mis)understanding. 


Thanks! 
Lai 


From: "Fajar A. Nugraha" <l...@fajar.net> 
To: "lxc-users" <lxc-users@lists.linuxcontainers.org> 
Sent: Wednesday, March 7, 2018 1:17:07 PM 
Subject: Re: [lxc-users] LXC copy snapshots only to remote? 

On Thu, Mar 8, 2018 at 12:03 AM, Lai Wei-Hwa <wh...@robco.com> wrote: 



Hi Everyone, 

I'm probably not fully grasping how LXC containers/snapshotting works, but why 
isn't the following possible? 

Host   Monday                                 Tuesday                              Wed                          Thurs 
H1     C1 (fresh Ubuntu), SA (added apache)   SB (removed apache and added nginx)  SC (hardened nginx config)   Host dies 
H2     -                                      C2 (created from C1 Snapshot B)      -                            SC (with hardened nginx config) 


On Tuesday, I snapshot C1 (creating SB) and stop the container. I then jump on 
a new host (H2) and copy snapshot B: 


H2$ lxc copy H1:C1/SB C2 



At this point, my C2 is equivalent to C1 + SA + SB. Thus, I believe that C1 + 
SA + SB = C2 

On Wednesday, I take Snapshot C on H1. 

I believe that on Wednesday, after taking SC, I should be able to copy SC alone 
to H2. And then on Thursday, when H1 dies, I should be able to go to H2 and 
launch SC (C2 + SC) and have the same container I had on H1 when I first took 
Snapshot C. 

If I'm wrong, why am I wrong? If I'm right, how do I copy SC by itself (and not 
the whole container) to H2 on Wednesday? 





I'm pretty sure lxc doesn't do incremental snapshots. 

To get what you want, you need to manage the storage snapshots (and incremental 
send) yourself. For example, if using zfs, you can use sanoid + syncoid. 
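
With sanoid/syncoid that boils down to something like this (dataset and host 
names are examples): 

syncoid -r tank/lxd root@h2:tank/lxd 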

-- 
Fajar 

___ 
lxc-users mailing list 
lxc-users@lists.linuxcontainers.org 
http://lists.linuxcontainers.org/listinfo/lxc-users 

[lxc-users] LXC copy snapshots only to remote?

2018-03-07 Thread Lai Wei-Hwa
Hi Everyone, 

I'm probably not fully grasping how LXC containers/snapshotting works, but why 
isn't the following possible? 

Host   Monday                                 Tuesday                              Wed                          Thurs 
H1     C1 (fresh Ubuntu), SA (added apache)   SB (removed apache and added nginx)  SC (hardened nginx config)   Host dies 
H2     -                                      C2 (created from C1 Snapshot B)      -                            SC (with hardened nginx config) 


On Tuesday, I snapshot C1 (creating SB) and stop the container. I then jump on 
a new host (H2) and copy snapshot B: 



H2$ lxc copy H1:C1/SB C2 



At this point, my C2 is equivalent to C1 + SA + SB. Thus, I believe that C1 + 
SA + SB = C2 

On Wednesday, I take Snapshot C on H1. 

I believe that on Wednesday, after taking SC, I should be able to copy SC alone 
to H2. And then on Thursday, when H1 dies, I should be able to go to H2 and 
launch SC (C2 + SC) and have the same container I had on H1 when I first took 
Snapshot C. 

If I'm wrong, why am I wrong? If I'm right, how do I copy SC by itself (and not 
the whole container) to H2 on Wednesday? 




Best Regards, 

Lai Wei-Hwa 
IT Administrator 
T. (514) 367-2252 ext 6308 C. (514) 218-7400 wh...@robco.com 
Montreal - Toronto - Edmonton www.robco.com 

ISO 9001 / 14001 

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] spacewalk in lxc?

2017-12-19 Thread Lai Wei-Hwa

I've done it with LXD/LXC and no issues thus far, though I have not added any 
clients yet. 

On Dec 19, 2017 1:45 PM, jjs - mainphrame wrote: 

Greetings, 

Has anyone had any success setting up a spacewalk server in an lxc container? 
I'd like to set up spacewalk for my centos hosts, and would prefer to run it 
in a container. I suspect it could be made to work, but curious if any special 
workarounds are needed. 

Jake 
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Problem with a docker pull & deleting btrfs subvolumes in a container

2017-12-07 Thread Lai Wei-Hwa
I've figured out how to remove the subvolumes: 
root@R510-LXD4-DMZ:~# btrfs subvolume list /storage 
ID 257 gen 83 top level 5 path lxd/common/lxd/storage-pools/Pool-1 
ID 258 gen 165 top level 257 path 
lxd/common/lxd/storage-pools/Pool-1/containers 
ID 259 gen 81 top level 257 path lxd/common/lxd/storage-pools/Pool-1/snapshots 
ID 260 gen 165 top level 257 path lxd/common/lxd/storage-pools/Pool-1/images 
ID 261 gen 83 top level 257 path lxd/common/lxd/storage-pools/Pool-1/custom 
ID 264 gen 165 top level 260 path 
lxd/common/lxd/storage-pools/Pool-1/images/ca0613ce3f58111feb09a4775216fad94046360a00586738f8f80c2fb20b5bc7
 
ID 265 gen 4527 top level 258 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud 
ID 497 gen 4285 top level 265 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/3398c42dda6ee295d52d593a0e5334dd51a172826e5881ec9dff84bc37315c8d
 
ID 498 gen 4285 top level 265 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/b9677d2c91cd0ccc00f8b557b56c103390c777b3e2574aee3a35595224665d75
 
ID 499 gen 4285 top level 265 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/b62a1c869c9e9b7dbc8693b7b97c14cb7ccf09b3bf52e0d6a7656cc3bb50c312
 
ID 500 gen 4285 top level 265 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/5a7b74cc5a0e14d161d5ea942001b83812021260722e64da27cb315a52d6fa70
 
ID 501 gen 4285 top level 265 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/37b4ed38bb85b4140ab416ff74317968e4e18fd2ffadb74236cfb4b4254f785b
 
ID 502 gen 4285 top level 265 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/76ecc26b9fd080267c4e937f80b0993ce0e637847240f36dd3c434eae01fbb37
 
ID 503 gen 4285 top level 265 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/44474f3511f76cda0cb0f4227c59981e8a830fc90f17a2e9376e0f6b35842590
 
ID 505 gen 4285 top level 265 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/e3db5dcb5a3e25caae66eed98976d46ec46f17a0a3e3e94e0985635fc614ec54
 
ID 508 gen 4285 top level 265 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/6b359e6c6a5714c0e227a444601c7d4c4ab281079ebecf712b772c553e52f65f
 
ID 511 gen 4285 top level 265 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/77e3c37a5af37ff8e66fe58349afac618882ac684de3231ffe12a163e9c7211f
 
ID 513 gen 4508 top level 265 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/97c0031787223910a30eaf207b51f49bf29636b78c91b4ff863ade3b2712ebf2
 
ID 514 gen 4508 top level 265 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/5a5c9a877cb45167eafadc4d495d2d6f426fbcc3ca621ca8486ab89380aa7abb
 
ID 515 gen 4508 top level 265 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/6da41daadc16030e047af3a9a568266a29ee6ea109df16361b998f59c7da0d4f
 
ID 516 gen 4508 top level 265 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/2dea1f6a2751b66441e632e60e6b5ca56a4cdb02bcf85e1d7685ccd30e10d266
 
ID 517 gen 4508 top level 265 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/ffc00c903c2df58df0c7fad14dfe15436410170647da8117d0f098ab9897afc5
 
ID 518 gen 4508 top level 265 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/0a59b1c82fe9afa60af6ec774f1563b85cc8badd4020422c6985eed6df02f4a3
 
ID 519 gen 4508 top level 265 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/05e65121f1be1ff23ed768f287a2c98520c48bc245c46f1f334be0e8ee8950a4
 
ID 520 gen 4508 top level 265 path 
lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/e72e4f68d906b2cd8713241dc752f733f79dae88939d8248946234ce180d
 

root@R510-LXD4-DMZ:~# btrfs subvolume delete 
/storage/lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/3398c42dda6ee295d52d593a0e5334dd51a172826e5881ec9dff84bc37315c8d
 
Delete subvolume (no-commit): 
'/storage/lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud/rootfs/var/lib/docker/btrfs/subvolumes/3398c42dda6ee295d52d593a0e5334dd51a172826e5881ec9dff84bc37315c8d'
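
To remove the rest in one go, a loop along these lines should work (untested 
beyond the single delete above; adjust the mount point if yours differs): 

for sv in $(btrfs subvolume list -o /storage/lxd/common/lxd/storage-pools/Pool-1/containers/nextcloud \ 
    | awk '{print $NF}' | grep docker/btrfs/subvolumes); do 
    btrfs subvolume delete "/storage/$sv" 
done 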
 

I still haven't figured out how to properly run docker in LXD/LXC. I've seen 
this: https://stackoverflow.com/posts/25885682/revisions but I'm not sure how 
to properly/securely do this with LXD. 
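
The one LXD-level knob I've come across for this is nesting, e.g. (I'm not 
sure about the security trade-offs, so treat this as a sketch): 

lxc config set nextcloud security.nesting true 
lxc restart nextcloud 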

Thanks! 
Lai 


From: "Lai Wei-Hwa" <wh

[lxc-users] Problem with a docker pull & deleting btrfs subvolumes in a container

2017-12-07 Thread Lai Wei-Hwa
Howdy all, 

Host: Xenial 16.04 
LXD: Snappy 2.20 

When I try to pull and extract code for a docker container, I get: 
root@nextcloud:~# sudo docker pull collabora/code 
Using default tag: latest 
latest: Pulling from collabora/code 
bd97b43c27e3: Pull complete 
6960dc1aba18: Pull complete 
2b61829b0db5: Pull complete 
1f88dc826b14: Pull complete 
73b3859b1e43: Pull complete 
0f0e2d01915e: Pull complete 
2458b914d686: Pull complete 
a3e2abb56fa4: Extracting [==>] 
972.1 MB/972.1 MB 
failed to register layer: ApplyLayer exit status 1 stdout: stderr: operation 
not permitted 

Additionally, when I went to delete the btrfs subvolumes created by docker 
(maybe related to this issue: 
https://lists.linuxcontainers.org/pipermail/lxc-users/2015-February/008494.html 
?): 
root@nextcloud:/var/lib/docker/btrfs/subvolumes# rm -rf * 
rm: cannot remove 
'3398c42dda6ee295d52d593a0e5334dd51a172826e5881ec9dff84bc37315c8d': Operation 
not permitted 
rm: cannot remove 
'37b4ed38bb85b4140ab416ff74317968e4e18fd2ffadb74236cfb4b4254f785b': Operation 
not permitted 
rm: cannot remove 
'44474f3511f76cda0cb0f4227c59981e8a830fc90f17a2e9376e0f6b35842590': Operation 
not permitted 
rm: cannot remove 
'5a7b74cc5a0e14d161d5ea942001b83812021260722e64da27cb315a52d6fa70': Operation 
not permitted 
rm: cannot remove 
'6b359e6c6a5714c0e227a444601c7d4c4ab281079ebecf712b772c553e52f65f': Operation 
not permitted 
rm: cannot remove 
'76ecc26b9fd080267c4e937f80b0993ce0e637847240f36dd3c434eae01fbb37': Operation 
not permitted 
rm: cannot remove 
'77e3c37a5af37ff8e66fe58349afac618882ac684de3231ffe12a163e9c7211f': Operation 
not permitted 
rm: cannot remove 
'b62a1c869c9e9b7dbc8693b7b97c14cb7ccf09b3bf52e0d6a7656cc3bb50c312': Operation 
not permitted 
rm: cannot remove 
'b9677d2c91cd0ccc00f8b557b56c103390c777b3e2574aee3a35595224665d75': Operation 
not permitted 
rm: cannot remove 
'e3db5dcb5a3e25caae66eed98976d46ec46f17a0a3e3e94e0985635fc614ec54': Operation 
not permitted 

Which I'm assuming is the same issue causing: 
root@nextcloud:/var/lib/docker/btrfs/subvolumes# sudo apt remove --purge 
docker.io 
Reading package lists... Done 
Building dependency tree 
Reading state information... Done 
The following packages were automatically installed and are no longer required: 
at-spi2-core bridge-utils cgroupfs-mount containerd gconf-service 
gconf-service-backend gconf2 gconf2-common libatk-bridge2.0-0 
libatk-wrapper-java libatk-wrapper-java-jni libatspi2.0-0 libavahi-glib1 
libbonobo2-0 libbonobo2-common libcanberra0 libfontenc1 libgconf-2-4 
libgnome-2-0 libgnome2-common libgnomevfs2-0 libgnomevfs2-common libice6 
liborbit-2-0 libsm6 libtdb1 libvorbisfile3 libxaw7 
libxcb-shape0 libxft2 libxmu6 libxt6 libxv1 libxxf86dga1 runc 
sound-theme-freedesktop ubuntu-fan x11-utils 
Use 'sudo apt autoremove' to remove them. 
The following packages will be REMOVED: 
docker.io* 
0 upgraded, 0 newly installed, 1 to remove and 11 not upgraded. 
After this operation, 62.7 MB disk space will be freed. 
Do you want to continue? [Y/n] y 
(Reading database ... 34706 files and directories currently installed.) 
Removing docker.io (1.13.1-0ubuntu1~16.04.2) ... 
'/usr/share/docker.io/contrib/nuke-graph-directory.sh' -> 
'/var/lib/docker/nuke-graph-directory.sh' 
Purging configuration files for docker.io (1.13.1-0ubuntu1~16.04.2) ... 

Nuking /var/lib/docker ... 
(if this is wrong, press Ctrl+C NOW!) 

+ sleep 10 

+ umount -f /var/lib/docker/btrfs 
umount: /var/lib/docker/btrfs: block devices are not permitted on filesystem 
dpkg: error processing package docker.io (--purge): 
subprocess installed post-removal script returned error exit status 32 
Processing triggers for man-db (2.7.5-1) ... 
Errors were encountered while processing: 
docker.io 
E: Sub-process /usr/bin/dpkg returned an error code (1) 

Question 1: 
How can I get past the docker pull issue? 

Question 2: 
Does the btrfs subvolume issue mean that any subvolume in a container can't be 
removed? If it can, what's the proper way to remove them? 

Best Regards, 

Lai Wei-Hwa 
IT Administrator 
T. (514) 367-2252 ext 6308 C. (514) 218-7400 wh...@robco.com 
Montreal - Toronto - Edmonton www.robco.com 

ISO 9001 / 14001 

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD Image Handling

2017-11-28 Thread Lai Wei-Hwa
Per Stephane's blog post ( https://stgraber.org/2016/03/30/lxd-2-0-image-management-512/ ) 
I would expect that ... 

lxc image copy remote:image local: --auto-update 

... should be sufficient for the job. But the docs say we need a recorded 
source. Should I assume that the source is recorded with the above command? 
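
I suppose one way to verify is to inspect the image after the copy; if I read 
the docs right, the flag shows up in the image yaml (the alias/fingerprint 
below is a placeholder): 

lxc image show <alias-or-fingerprint> 
# expect to see: auto_update: true 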

Thanks! 
Lai 


From: "Lai Wei-Hwa" <wh...@robco.com> 
To: "lxc-users" <lxc-users@lists.linuxcontainers.org> 
Sent: Monday, November 27, 2017 4:03:35 PM 
Subject: LXD Image Handling 

From the docs ( https://lxd.readthedocs.io/en/latest/image-handling/ ): 

On startup and then every 6 hours (unless images.auto_update_interval is set), 
the LXD daemon will go look for more recent version of all the images in the 
store which are marked as auto-update and have a recorded source server. 

How do we ensure that our images are auto-updated and have the source server 
recorded? 


1. Can we do this when copying an image from a remote server? 
2. Can we do this when deploying a local container based on an image from a 
remote server? 

I'm running the snap package 2.20 

Best Regards, 

Lai Wei-Hwa 
IT Administrator 
T. (514) 367-2252 ext 6308 C. (514) 218-7400 wh...@robco.com 
Montreal - Toronto - Edmonton www.robco.com 

ISO 9001 / 14001 


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Snap 2.20 - Default Text Editor

2017-11-28 Thread Lai Wei-Hwa
Or perhaps you can consider providing plug interfaces for other editors 
(editors would need to be snaps as well and provide slot interfaces)?

Thanks! 
Lai

- Original Message -
From: "Lai Wei-Hwa" <wh...@robco.com>
To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
Sent: Monday, November 27, 2017 1:06:56 PM
Subject: Re: [lxc-users] Snap 2.20 - Default Text Editor

Thanks for the clarification, Stephane. 

I was actually unaware that snaps couldn't interact with binaries outside of 
the ones it owns. Makes sense and adds some security. That being said, are 
there any plans to add nano and/or vim to the snap package? While it would make 
for a larger snap package, a lot of our breed are quite particular about their 
editors and I can see this being an annoyance for many. 

Thanks! 
Lai

- Original Message -
From: "Stéphane Graber" <stgra...@ubuntu.com>
To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
Sent: Friday, November 24, 2017 2:44:27 AM
Subject: Re: [lxc-users] Snap 2.20 - Default Text Editor

Because of the way the snap works, it doesn't actually have access to
your system's filesystem and so can't run commands from your system.

Only binaries that are part of the snap or the minimal copy of Ubuntu
that the snap uses can be used by it.

We ship a copy of vi in our snap to make the editor function work,
that's why you get vi regardless of what EDITOR/VISUAL points to in your
environment.


You can workaround this by using something like:

lxc config show NAME > out.yaml
$EDITOR out.yaml
lxc config edit NAME < out.yaml

The same is possible for all the other objects which come with an "edit"
command.

On Wed, Nov 22, 2017 at 09:57:11AM -0500, Lai Wei-Hwa wrote:
> Ron, 
> 
> You're telling me the normal way to set your default editor. I know how to do 
> this. The problem is that my default editor is nano, which I want, but when I:
> $ lxc profile edit default 
> It still opens in VI. This is an LXC issue. 
> 
> Thanks! 
> Lai
> 
> - Original Message -
> From: "Ron Kelley" <rkelley...@gmail.com>
> To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
> Sent: Tuesday, November 21, 2017 10:07:32 PM
> Subject: Re: [lxc-users] Snap 2.20 - Default Text Editor
> 
> sudo update-alternatives --config editor
> 
> http://vim.wikia.com/wiki/Set_Vim_as_your_default_editor_for_Unix
> 
> 
> 
> 
> 
> > On Nov 21, 2017, at 7:49 PM, Lai Wei-Hwa <wh...@robco.com> wrote:
> > 
> > Thanks, but that's the problem, it's still opening in VI
> > 
> > Thanks! 
> > Lai
> > 
> > - Original Message -
> > From: "Björn Fischer" <b...@cebitec.uni-bielefeld.de>
> > To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
> > Sent: Tuesday, November 21, 2017 7:46:58 PM
> > Subject: Re: [lxc-users] Snap 2.20 - Default Text Editor
> > 
> > Hi,
> > 
> >> $ lxc profile edit default
> >> Opens in VI even though my editor is nano (save the flaming)
> >> 
> >> How can we edit the default editor?
> > 
> > $ EDITOR=nano
> > $ export EDITOR
> > 
> > Cheers,
> > 
> > Björn
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXD Image Handling

2017-11-27 Thread Lai Wei-Hwa
From the docs ( https://lxd.readthedocs.io/en/latest/image-handling/ ): 

On startup and then every 6 hours (unless images.auto_update_interval is set), 
the LXD daemon will go look for more recent version of all the images in the 
store which are marked as auto-update and have a recorded source server. 

How do we ensure that our images are auto-updated and have the source server 
recorded? 


1. Can we do this when copying an image from a remote server? 
2. Can we do this when deploying a local container based on an image from a 
remote server? 

I'm running the snap package 2.20 

Best Regards, 

Lai Wei-Hwa 
IT Administrator 
T. (514) 367-2252 ext 6308 C. (514) 218-7400 wh...@robco.com 
Montreal - Toronto - Edmonton www.robco.com 

ISO 9001 / 14001 

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Snap 2.20 - Default Text Editor

2017-11-27 Thread Lai Wei-Hwa
Thanks for the clarification, Stephane. 

I was actually unaware that snaps couldn't interact with binaries outside of 
the ones it owns. Makes sense and adds some security. That being said, are 
there any plans to add nano and/or vim to the snap package? While it would make 
for a larger snap package, a lot of our breed are quite particular about their 
editors and I can see this being an annoyance for many. 

Thanks! 
Lai

- Original Message -
From: "Stéphane Graber" <stgra...@ubuntu.com>
To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
Sent: Friday, November 24, 2017 2:44:27 AM
Subject: Re: [lxc-users] Snap 2.20 - Default Text Editor

Because of the way the snap works, it doesn't actually have access to
your system's filesystem and so can't run commands from your system.

Only binaries that are part of the snap or the minimal copy of Ubuntu
that the snap uses can be used by it.

We ship a copy of vi in our snap to make the editor function work,
that's why you get vi regardless of what EDITOR/VISUAL points to in your
environment.


You can workaround this by using something like:

lxc config show NAME > out.yaml
$EDITOR out.yaml
lxc config edit NAME < out.yaml

The same is possible for all the other objects which come with an "edit"
command.

On Wed, Nov 22, 2017 at 09:57:11AM -0500, Lai Wei-Hwa wrote:
> Ron, 
> 
> You're telling me the normal way to set your default editor. I know how to do 
> this. The problem is that my default editor is nano, which I want, but when I:
> $ lxc profile edit default 
> It still opens in VI. This is an LXC issue. 
> 
> Thanks! 
> Lai
> 
> - Original Message -
> From: "Ron Kelley" <rkelley...@gmail.com>
> To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
> Sent: Tuesday, November 21, 2017 10:07:32 PM
> Subject: Re: [lxc-users] Snap 2.20 - Default Text Editor
> 
> sudo update-alternatives --config editor
> 
> http://vim.wikia.com/wiki/Set_Vim_as_your_default_editor_for_Unix
> 
> 
> 
> 
> 
> > On Nov 21, 2017, at 7:49 PM, Lai Wei-Hwa <wh...@robco.com> wrote:
> > 
> > Thanks, but that's the problem, it's still opening in VI
> > 
> > Thanks! 
> > Lai
> > 
> > - Original Message -
> > From: "Björn Fischer" <b...@cebitec.uni-bielefeld.de>
> > To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
> > Sent: Tuesday, November 21, 2017 7:46:58 PM
> > Subject: Re: [lxc-users] Snap 2.20 - Default Text Editor
> > 
> > Hi,
> > 
> >> $ lxc profile edit default
> >> Opens in VI even though my editor is nano (save the flaming)
> >> 
> >> How can we edit the default editor?
> > 
> > $ EDITOR=nano
> > $ export EDITOR
> > 
> > Cheers,
> > 
> > Björn
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Bonding inside container? Or any other ideas?

2017-11-22 Thread Lai Wei-Hwa
Hi Andrey, 

Are you trying to use bond0 as the container's interface? If so, I think that's 
going to cause issues. You need an interface behind (in front?) bond0. 

Here is my interfaces file - you're going to need some device between bond0 
and LXC, though. 


lai@R610-LXD1-Dev-DMZ:~$ cat /etc/network/interfaces 
# This file describes the network interfaces available on your system 
# and how to activate them. For more information, see interfaces(5). 

source /etc/network/interfaces.d/* 

# The loopback network interface 
auto lo 
iface lo inet loopback 

auto eno1 
iface eno1 inet manual 
bond-master bond0 

auto eno2 
iface eno2 inet manual 
bond-master bond0 

auto eno3 
iface eno3 inet manual 
bond-master bond0 

auto eno4 
iface eno4 inet manual 
bond-master bond0 

auto bond0 
iface bond0 inet manual 
bond-mode 4 
bond-slaves none 
bond-miimon 100 
bond-lacp-rate 1 
bond-downdelay 200 
bond-updelay 200 
bond-xmit-hash-policy layer2+3 

auto br0 
iface br0 inet static 
bridge_ports bond0 
bridge_maxwait 10 
address 10.1.1.139 
netmask 255.255.0.0 
broadcast 10.1.255.255 
network 10.1.0.0 
gateway 10.1.1.5 
dns-nameservers 10.1.1.84 8.8.8.8 


From: "Andrey Repin" <anrdae...@yandex.ru> 
To: "lxc-users" <lxc-users@lists.linuxcontainers.org>, "lxc-users" 
<lxc-users@lists.linuxcontainers.org> 
Sent: Wednesday, November 22, 2017 5:25:11 PM 
Subject: Re: [lxc-users] Bonding inside container? Or any other ideas? 

Greetings, Lai Wei-Hwa! 



I'm not sure I follow. I have multiple servers running Bond Mode 4 (for 
LACP/802.3ad). 



802.3ad (mode 4) requires switch support. 
Unfortunately, my switch is "managed", but does not offer this essential 
specification. 


> I then created a bridge, br0 which becomes the main (only) interface. 

After having a hard time with some of the configurations, I avoid brctl like 
the plague. It may be a tool to bridge physical interfaces, but for a single 
host it is an extreme overhead. 


-- 
With best regards, 
Andrey Repin 
Thursday, November 23, 2017 01:20:52 

Sorry for my terrible english... 

___ 
lxc-users mailing list 
lxc-users@lists.linuxcontainers.org 
http://lists.linuxcontainers.org/listinfo/lxc-users 

Re: [lxc-users] Snap 2.20 - Default Text Editor

2017-11-22 Thread Lai Wei-Hwa
Ron, 

You're telling me the normal way to set your default editor. I know how to do 
this. The problem is that my default editor is nano, which I want, but when I:
$ lxc profile edit default 
It still opens in VI. This is an LXC issue. 

Thanks! 
Lai

- Original Message -
From: "Ron Kelley" <rkelley...@gmail.com>
To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
Sent: Tuesday, November 21, 2017 10:07:32 PM
Subject: Re: [lxc-users] Snap 2.20 - Default Text Editor

sudo update-alternatives --config editor

http://vim.wikia.com/wiki/Set_Vim_as_your_default_editor_for_Unix





> On Nov 21, 2017, at 7:49 PM, Lai Wei-Hwa <wh...@robco.com> wrote:
> 
> Thanks, but that's the problem, it's still opening in VI
> 
> Thanks! 
> Lai
> 
> - Original Message -
> From: "Björn Fischer" <b...@cebitec.uni-bielefeld.de>
> To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
> Sent: Tuesday, November 21, 2017 7:46:58 PM
> Subject: Re: [lxc-users] Snap 2.20 - Default Text Editor
> 
> Hi,
> 
>> $ lxc profile edit default
>> Opens in VI even though my editor is nano (save the flaming)
>> 
>> How can we edit the default editor?
> 
> $ EDITOR=nano
> $ export EDITOR
> 
> Cheers,
> 
> Björn
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Snap 2.20 - Default Text Editor

2017-11-21 Thread Lai Wei-Hwa
Thanks, but that's the problem, it's still opening in VI

Thanks! 
Lai

- Original Message -
From: "Björn Fischer" 
To: "lxc-users" 
Sent: Tuesday, November 21, 2017 7:46:58 PM
Subject: Re: [lxc-users] Snap 2.20 - Default Text Editor

Hi,

> $ lxc profile edit default
> Opens in VI even though my editor is nano (save the flaming)
> 
> How can we edit the default editor?

$ EDITOR=nano
$ export EDITOR

Cheers,

Björn
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Snap 2.20 - Default Text Editor

2017-11-21 Thread Lai Wei-Hwa

$ lxc profile edit default 
Opens in VI even though my editor is nano (save the flaming) 

How can we edit the default editor? 

Best Regards, 

Lai Wei-Hwa 
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Bonding inside container? Or any other ideas?

2017-11-21 Thread Lai Wei-Hwa
I'm not sure I follow. I have multiple servers running Bond Mode 4 (for 
LACP/802.3ad). I then created a bridge, br0 which becomes the main (only) 
interface. I'm using flat networking with no NATS between containers and edited 
the profiles to use br0. Everything works for me. I can't speak to the other 
bond modes, though. 

Thanks! 
Lai

- Original Message -
From: "Andrey Repin" 
To: "lxc-users" 
Sent: Tuesday, November 21, 2017 6:38:55 PM
Subject: [lxc-users] Bonding inside container? Or any other ideas?

Greetings, All!

Some time ago I've managed to install a second network card into one of
my servers, and have been experimenting with bonding on host.
The field is: a host with two cards in one bond0 interface.
A number of containers sitting as macvlans on top of bond0.

Some success was achieved with bond mode 5 (balance-tlb) - approx 2:1 TX
counts with five clients, but all upload is weighted on one network card.

An attempt to change the mode to balance-alb (mode 6) immediately broke the
loading of roaming Windows profiles; the issue immediately disappears once I
switch back to mode 5.

I suppose this happens because the bonding balancer creates havoc with the
macvlan and bonding MAC addresses, which the network can't easily resolve, or
the Windows clients get picky and refuse to load stuff from a randomly
changing source.

While I could turn back to internal LXC bridge and route requests between it
and bond0 on host to dissolve the MAC issue, I'd like to see if there's a more
direct solution could be found, such as creating a bonding inside container?

Or if not, is there any other way to use bonding and maintain broadcast
visibility range between containers and the rest of the network?


-- 
With best regards,
Andrey Repin
Wednesday, November 22, 2017 02:23:22

Sorry for my terrible english...

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Using a mounted drive to handle storage pool

2017-11-21 Thread Lai Wei-Hwa
That seems to work!

I still get the message:
error: Unable to talk to LXD: Get http://unix.socket/1.0: dial unix 
/var/snap/lxd/common/lxd/unix.socket: connect: no such file or directory

But if I run it again, it inits.  Thanks, Ron.

Thanks! 
Lai

- Original Message -
From: "Ron Kelley" <rkelley...@gmail.com>
To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
Sent: Tuesday, November 21, 2017 6:26:30 PM
Subject: Re: [lxc-users] Using a mounted drive to handle storage pool

Perhaps you should use “bind” mount instead of symbolic links here?

mount -o bind /storage/lxd /var/snap/lxd

You probably also need to make sure it survives a reboot.  
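
For example, with an /etc/fstab entry along these lines (paths from your 
earlier mail): 

/storage/lxd  /var/snap/lxd  none  bind  0  0 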



-Ron




> On Nov 21, 2017, at 5:47 PM, Lai Wei-Hwa <wh...@robco.com> wrote:
> 
> In the following scenario, I:
> 
> $ sudo mount /dev/sdb /storage
> 
> Then, when I do:
> 
> $ sudo ln -s /storage/lxd lxd
> $ snap install lxd
> $ sudo lxd init
> error: Unable to talk to LXD: Get http://unix.socket/1.0: dial unix 
> /var/snap/lxd/common/lxd/unix.socket: connect: no such file or directory
> 
> 
> 
> Thanks! 
> Lai
> 
> From: "Lai Wei-Hwa" <wh...@robco.com>
> To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
> Sent: Tuesday, November 21, 2017 1:37:18 PM
> Subject: [lxc-users] Using a mounted drive to handle storage pool
> 
> I've currently migrated LXD from canonical PPA to Snap. 
> 
> I have 2 RAIDS:
>   • /dev/sda - ext4 (this is root device)
>   • /dev/sdb - btrfs (where I want my pool to be with the containers and 
> snapshots)
> How/where should I mount my btrfs device? What's the best practice in having 
> the pool be in a non-root device? 
> 
> There are a few approaches I can see
>   • mount /dev/sdb to /var/snap/lxd (or /var/lib/lxd - if you're using 
> PPA) ... then: lxd init
>   • mount /dev/sdb to /storage and: ln -s /storage/lxd /var/snap/lxd ... 
> then: lxd init
>   • lxd init and choose existing block device /dev/sdb
> What's the best practice and why?
> 
> Also, I'd love it if LXD could make this a little easier and let users more 
> easily define where the storage pool will be located. 
> 
> Best Regards,
> 
> Lai 
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Using a mounted drive to handle storage pool

2017-11-21 Thread Lai Wei-Hwa
In the following scenario, I: 

$ sudo mount /dev/sdb /storage 

Then, when I do: 

$ sudo ln -s /storage/lxd lxd 
$ snap install lxd 
$ sudo lxd init 
error: Unable to talk to LXD: Get http://unix.socket/1.0: dial unix 
/var/snap/lxd/common/lxd/unix.socket: connect: no such file or directory 



Thanks! 
Lai 


From: "Lai Wei-Hwa" <wh...@robco.com> 
To: "lxc-users" <lxc-users@lists.linuxcontainers.org> 
Sent: Tuesday, November 21, 2017 1:37:18 PM 
Subject: [lxc-users] Using a mounted drive to handle storage pool 

I've currently migrated LXD from canonical PPA to Snap. 

I have 2 RAIDS: 


* /dev/sda - ext4 (this is root device) 
* /dev/sdb - btrfs (where I want my pool to be with the containers and 
snapshots) 

How/where should I mount my btrfs device? What's the best practice in having 
the pool be in a non-root device? 

There are a few approaches I can see 


1. mount /dev/sdb to /var/snap/lxd (or /var/lib/lxd - if you're using PPA) 
... then: lxd init 
2. mount /dev/sdb to /storage and: ln -s /storage/lxd /var/snap/lxd ... 
then: lxd init 
3. lxd init and choose existing block device /dev/sdb 

What's the best practice and why? 

Also, I'd love it if LXD could make this a little easier and let users more 
easily define where the storage pool will be located. 

Best Regards, 

Lai 

___ 
lxc-users mailing list 
lxc-users@lists.linuxcontainers.org 
http://lists.linuxcontainers.org/listinfo/lxc-users 

[lxc-users] Using a mounted drive to handle storage pool

2017-11-21 Thread Lai Wei-Hwa
I've currently migrated LXD from canonical PPA to Snap. 

I have 2 RAIDS: 


* /dev/sda - ext4 (this is root device) 
* /dev/sdb - btrfs (where I want my pool to be with the containers and 
snapshots) 

How/where should I mount my btrfs device? What's the best practice in having 
the pool be in a non-root device? 

There are a few approaches I can see 


1. mount /dev/sdb to /var/snap/lxd (or /var/lib/lxd - if you're using PPA) 
... then: lxd init 
2. mount /dev/sdb to /storage and: ln -s /storage/lxd /var/snap/lxd ... 
then: lxd init 
3. lxd init and choose existing block device /dev/sdb 

What's the best practice and why? 
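
For option 3, I assume the explicit equivalent after init would be something 
like this (the pool name is my guess): 

lxc storage create pool1 btrfs source=/dev/sdb 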

Also, I'd love it if LXD could make this a little easier and let users more 
easily define where the storage pool will be located. 

Best Regards, 

Lai 
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users