Re: Running KVM in addition to LXC on local LXD CLOUD

2017-07-02 Thread John Meinel
Generally you have to configure the profile of the containers (either by
configuring the default profiles or by altering the configuration of a
single container and then restarting that container).
If there are particular modules that you know you will need, then you can
use "linux.kernel_modules" to ensure that those modules are loaded before
the container starts (because you cannot load modules from inside the
container).

I'm not entirely sure how you get /lib/modules to be present. It may be
that setting kernel_modules is sufficient and it will cause those
directories to be available, or more likely you'll want to do something
like bind mount the host's /lib/modules into the container (which you can
also do via the profile).
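As an untested sketch (using the container name oai-spgw-0 and the gtp module
from your logs; adjust as needed), from the host something like:

  # ask LXD to load the module before the container starts
  lxc config set oai-spgw-0 linux.kernel_modules gtp

  # bind mount the host's /lib/modules into the container, read-only
  lxc config device add oai-spgw-0 modules disk \
      source=/lib/modules path=/lib/modules readonly=true

  lxc restart oai-spgw-0

The same keys can be put into a profile instead (e.g. "lxc profile edit
default") if you want every container to pick them up. Note that the "build"
symlink in /lib/modules/<version>/ points at /usr/src/linux-headers-..., so
for an out-of-tree module build like ue_ip you may also need to bind mount
/usr/src (or install the matching linux-headers package inside the container).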

This is more a question about how to operate LXD than Juju, so I'm a little
unclear about some of the details. I'll refer you to some nice blog posts
from Stephane Graber, who drives LXD development:
  https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/

OpenStack happens to be one of the more complex things to run inside
containers, so that post series has a lot of detail about loading modules, etc.:
  https://stgraber.org/2016/10/26/lxd-2-0-lxd-and-openstack-1112/

John
=:->


On Sun, Jul 2, 2017 at 4:56 PM, N. S.  wrote:

> Hi again,
>
> Another challenge related to the same issue of an empty /lib/modules, from
> another LXC.
>
> make[4]: *** /lib/modules/4.8.0-56-lowlatency/build: No such file or
> directory.  Stop.
> CMakeFiles/ue_ip.dir/build.make:60: recipe for target 'ue_ip.ko' failed
> make[3]: *** [ue_ip.ko] Error 2
> CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/ue_ip.dir/all'
> failed
> make[2]: *** [CMakeFiles/ue_ip.dir/all] Error 2
> CMakeFiles/Makefile2:74: recipe for target 'CMakeFiles/ue_ip.dir/rule'
> failed
> make[1]: *** [CMakeFiles/ue_ip.dir/rule] Error 2
> Makefile:118: recipe for target 'ue_ip' failed
> make: *** [ue_ip] Error 2
>
>
> root@juju-0f8be6-10:/srv/openairinterface5g/cmake_targets# ls -l
> /lib/modules/
>
> total 0
>
> So,
>
> Is it possible to have the contents of the host's /lib/modules inherited
> by each LXC?
>
> Thanks,
>
> BR,
> NS
>
>
>
> On Jul 2, 2017 3:47 PM, "N. S."  wrote:
>
> Hi Nicholas,
>
>
> Thanks, you are right.
>
> Yes, I used:
>
>  sudo apt-get install --install-recommends linux-generic-hwe-16.04
>
>
>
> However, I have the following problem:
> I now have the host machine running 4.8, which is great, but the LXC
> containers underneath it have an empty /lib/modules.
>
> In fact,
>
>
> ON HOST Machine:
> root@ns-HP:/lib/modules/4.8.0-56-lowlatency# ll
>
> total 4824
> drwxr-xr-x  5 root root    4096 Jun 29 01:02 ./
> drwxr-xr-x  8 root root    4096 Jun 29 01:01 ../
> lrwxrwxrwx  1 root root      42 Jun 14 19:40 build ->
> /usr/src/linux-headers-4.8.0-56-lowlatency/
> drwxr-xr-x  2 root root    4096 Jun 14 19:27 initrd/
> drwxr-xr-x 14 root root    4096 Jun 29 01:01 kernel/
> -rw-r--r--  1 root root 1133746 Jun 29 01:02 modules.alias
> -rw-r--r--  1 root root 1121947 Jun 29 01:02 modules.alias.bin
> -rw-r--r--  1 root root    7271 Jun 14 19:25 modules.builtin
> -rw-r--r--  1 root root    9059 Jun 29 01:02 modules.builtin.bin
> -rw-r--r--  1 root root  504755 Jun 29 01:02 modules.dep
> -rw-r--r--  1 root root  717645 Jun 29 01:02 modules.dep.bin
> -rw-r--r--  1 root root     285 Jun 29 01:02 modules.devname
> -rw-r--r--  1 root root  190950 Jun 14 19:25 modules.order
> -rw-r--r--  1 root root     386 Jun 29 01:02 modules.softdep
> -rw-r--r--  1 root root  543694 Jun 29 01:02 modules.symbols
> -rw-r--r--  1 root root  664280 Jun 29 01:02 modules.symbols.bin
> drwxr-xr-x  3 root root    4096 Jun 29 01:01 vdso/
>
>
> root@ns-HP:/lib/modules/4.8.0-56-lowlatency# du -sh .
>
> 224M    .
>
>
> EXCERPT from within the LXC container called oai-spgw-0
> I can see the following error about empty /lib/modules.
>
> rmmod: ERROR: ../libkmod/libkmod.c:514 lookup_builtin_file() could not
> open builtin file '/lib/modules/4.8.0-56-lowlatency/modules.builtin.bin'
>
> rmmod: ERROR: Module gtp is not currently loaded
> modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not
> open moddep file '/lib/modules/4.8.0-56-lowlatency/modules.dep.bin'
> modprobe: FATAL: Module gtp not found in directory
> /lib/modules/4.8.0-56-lowlatency
> 000142 2:497894 7FF1A14AE700 CRITI GTPv2-
> air-cn/src/gtpv1-u/gtpv1u_task.c:0105ERROR in loading gtp kernel module
> (check if built in kernel)
>
> root@oai-spgw-0:~# uname -a
> Linux oai-spgw-0 4.8.0-56-lowlatency #61~16.04.1-Ubuntu SMP PREEMPT Wed
> Jun 14 13:24:54 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
>
> root@oai-spgw-0:/lib/modules# ll
> total 3
> drwxr-xr-x  2 root root  2 Jun 19 23:52 ./
> drwxr-xr-x 22 root root 26 Jun 29 11:55 ../
>
> root@oai-spgw-0:/lib/modules#
>
>
> Is it possible to make the contents of /lib/modules (224M) from the host
> available in the underlying LXC containers?
>
> Please advise.
>
> Thanks
> BR,
> NS
>
> On Jun 29, 

Re: Running KVM in addition to LXC on local LXD CLOUD

2017-07-02 Thread N. S.
Hi again,

Another challenge related to the same issue of an empty /lib/modules, from
another LXC.

make[4]: *** /lib/modules/4.8.0-56-lowlatency/build: No such file or directory.  Stop.
CMakeFiles/ue_ip.dir/build.make:60: recipe for target 'ue_ip.ko' failed
make[3]: *** [ue_ip.ko] Error 2
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/ue_ip.dir/all' failed
make[2]: *** [CMakeFiles/ue_ip.dir/all] Error 2
CMakeFiles/Makefile2:74: recipe for target 'CMakeFiles/ue_ip.dir/rule' failed
make[1]: *** [CMakeFiles/ue_ip.dir/rule] Error 2
Makefile:118: recipe for target 'ue_ip' failed
make: *** [ue_ip] Error 2


root@juju-0f8be6-10:/srv/openairinterface5g/cmake_targets# ls -l /lib/modules/

total 0

So,

Is it possible to have the contents of the host's /lib/modules inherited by
each LXC?

Thanks,

BR,
NS



On Jul 2, 2017 3:47 PM, "N. S."  wrote:
Hi Nicholas,


Thanks, you are right.

Yes, I used:


 sudo apt-get install --install-recommends linux-generic-hwe-16.04



However, I have the following problem:
I now have the host machine running 4.8, which is great, but the LXC containers
underneath it have an empty /lib/modules.

In fact,

ON HOST Machine:

root@ns-HP:/lib/modules/4.8.0-56-lowlatency# ll

total 4824
drwxr-xr-x  5 root root    4096 Jun 29 01:02 ./
drwxr-xr-x  8 root root    4096 Jun 29 01:01 ../
lrwxrwxrwx  1 root root      42 Jun 14 19:40 build -> /usr/src/linux-headers-4.8.0-56-lowlatency/
drwxr-xr-x  2 root root    4096 Jun 14 19:27 initrd/
drwxr-xr-x 14 root root    4096 Jun 29 01:01 kernel/
-rw-r--r--  1 root root 1133746 Jun 29 01:02 modules.alias
-rw-r--r--  1 root root 1121947 Jun 29 01:02 modules.alias.bin
-rw-r--r--  1 root root    7271 Jun 14 19:25 modules.builtin
-rw-r--r--  1 root root    9059 Jun 29 01:02 modules.builtin.bin
-rw-r--r--  1 root root  504755 Jun 29 01:02 modules.dep
-rw-r--r--  1 root root  717645 Jun 29 01:02 modules.dep.bin
-rw-r--r--  1 root root     285 Jun 29 01:02 modules.devname
-rw-r--r--  1 root root  190950 Jun 14 19:25 modules.order
-rw-r--r--  1 root root     386 Jun 29 01:02 modules.softdep
-rw-r--r--  1 root root  543694 Jun 29 01:02 modules.symbols
-rw-r--r--  1 root root  664280 Jun 29 01:02 modules.symbols.bin
drwxr-xr-x  3 root root    4096 Jun 29 01:01 vdso/


root@ns-HP:/lib/modules/4.8.0-56-lowlatency# du -sh .
224M    .


EXCERPT from within the LXC container called oai-spgw-0
I can see the following error about empty /lib/modules.

rmmod: ERROR: ../libkmod/libkmod.c:514 lookup_builtin_file() could not open 
builtin file '/lib/modules/4.8.0-56-lowlatency/modules.builtin.bin'
rmmod: ERROR: Module gtp is not currently loaded
modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open 
moddep file '/lib/modules/4.8.0-56-lowlatency/modules.dep.bin'
modprobe: FATAL: Module gtp not found in directory 
/lib/modules/4.8.0-56-lowlatency
000142 2:497894 7FF1A14AE700 CRITI GTPv2- 
air-cn/src/gtpv1-u/gtpv1u_task.c:0105ERROR in loading gtp kernel module 
(check if built in kernel)

root@oai-spgw-0:~# uname -a
Linux oai-spgw-0 4.8.0-56-lowlatency #61~16.04.1-Ubuntu SMP PREEMPT Wed Jun 14 
13:24:54 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

root@oai-spgw-0:/lib/modules# ll
total 3
drwxr-xr-x  2 root root  2 Jun 19 23:52 ./
drwxr-xr-x 22 root root 26 Jun 29 11:55 ../

root@oai-spgw-0:/lib/modules#


Is it possible to make the contents of /lib/modules (224M) from the host
available in the underlying LXC containers?

Please advise.

Thanks
BR,
NS

On Jun 29, 2017 3:45 AM, Nicholas Skaggs  wrote:
I'm not sure how you installed the hwe kernel, but you should simply


 sudo apt-get install --install-recommends linux-generic-hwe-16.04

On Thu, Jun 29, 2017 at 3:15 AM, N. S.  wrote:

Hi Nicholas and Andrew,


Thank you both for your help.

@Nicholas,
I downloaded the 4.8 and 4.11 low-latency kernels (a requirement of the
application) and installed them:
sudo dpkg -i linux-header-4.8* linux-image-4.8*
sudo update-grub

But I am faced with a challenge: ZFS is not starting up during boot. See
attached.

Further digging, the status of the services is as follows:



$ systemctl status zfs-import-cache.service
● zfs-import-cache.service - Import ZFS pools by cache file
   Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; static; vendor 
preset: enabled)
   Active: failed (Result: exit-code) since Wed 2017-06-28 19:54:33 EEST; 12min 
ago
  Process: 1032 ExecStartPre=/sbin/modprobe zfs (code=exited, status=1/FAILURE)

Jun 28 19:54:33 ns-HP systemd[1]: Starting Import ZFS pools by cache file...
Jun 28 19:54:33 ns-HP modprobe[1032]: modprobe: FATAL: Module zfs not found in 
directory /lib/modules/4.8.0-040800-lowlatency
Jun 28 19:54:33 ns-HP systemd[1]: zfs-import-cache.service: Control process 
exited, code=exited status=1
Jun 28 19:54:33 ns-HP systemd[1]: Failed to start Import ZFS pools by cache 
file.
Jun 28 19:54:33 ns-HP systemd[1]: zfs-import-cache.service: 

Re: Running KVM in addition to LXC on local LXD CLOUD

2017-07-02 Thread N. S.
Hi Nicholas,


Thanks, you are right.

Yes, I used:


 sudo apt-get install --install-recommends linux-generic-hwe-16.04



However, I have the following problem:
I now have the host machine running 4.8, which is great, but the LXC containers
underneath it have an empty /lib/modules.

In fact,

ON HOST Machine:

root@ns-HP:/lib/modules/4.8.0-56-lowlatency# ll

total 4824
drwxr-xr-x  5 root root    4096 Jun 29 01:02 ./
drwxr-xr-x  8 root root    4096 Jun 29 01:01 ../
lrwxrwxrwx  1 root root      42 Jun 14 19:40 build -> /usr/src/linux-headers-4.8.0-56-lowlatency/
drwxr-xr-x  2 root root    4096 Jun 14 19:27 initrd/
drwxr-xr-x 14 root root    4096 Jun 29 01:01 kernel/
-rw-r--r--  1 root root 1133746 Jun 29 01:02 modules.alias
-rw-r--r--  1 root root 1121947 Jun 29 01:02 modules.alias.bin
-rw-r--r--  1 root root    7271 Jun 14 19:25 modules.builtin
-rw-r--r--  1 root root    9059 Jun 29 01:02 modules.builtin.bin
-rw-r--r--  1 root root  504755 Jun 29 01:02 modules.dep
-rw-r--r--  1 root root  717645 Jun 29 01:02 modules.dep.bin
-rw-r--r--  1 root root     285 Jun 29 01:02 modules.devname
-rw-r--r--  1 root root  190950 Jun 14 19:25 modules.order
-rw-r--r--  1 root root     386 Jun 29 01:02 modules.softdep
-rw-r--r--  1 root root  543694 Jun 29 01:02 modules.symbols
-rw-r--r--  1 root root  664280 Jun 29 01:02 modules.symbols.bin
drwxr-xr-x  3 root root    4096 Jun 29 01:01 vdso/


root@ns-HP:/lib/modules/4.8.0-56-lowlatency# du -sh .
224M    .


EXCERPT from within the LXC container called oai-spgw-0
I can see the following error about empty /lib/modules.

rmmod: ERROR: ../libkmod/libkmod.c:514 lookup_builtin_file() could not open 
builtin file '/lib/modules/4.8.0-56-lowlatency/modules.builtin.bin'
rmmod: ERROR: Module gtp is not currently loaded
modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open 
moddep file '/lib/modules/4.8.0-56-lowlatency/modules.dep.bin'
modprobe: FATAL: Module gtp not found in directory 
/lib/modules/4.8.0-56-lowlatency
000142 2:497894 7FF1A14AE700 CRITI GTPv2- 
air-cn/src/gtpv1-u/gtpv1u_task.c:0105ERROR in loading gtp kernel module 
(check if built in kernel)

root@oai-spgw-0:~# uname -a
Linux oai-spgw-0 4.8.0-56-lowlatency #61~16.04.1-Ubuntu SMP PREEMPT Wed Jun 14 
13:24:54 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

root@oai-spgw-0:/lib/modules# ll
total 3
drwxr-xr-x  2 root root  2 Jun 19 23:52 ./
drwxr-xr-x 22 root root 26 Jun 29 11:55 ../

root@oai-spgw-0:/lib/modules#


Is it possible to make the contents of /lib/modules (224M) from the host
available in the underlying LXC containers?

Please advise.

Thanks
BR,
NS

On Jun 29, 2017 3:45 AM, Nicholas Skaggs  wrote:
I'm not sure how you installed the hwe kernel, but you should simply


 sudo apt-get install --install-recommends linux-generic-hwe-16.04

On Thu, Jun 29, 2017 at 3:15 AM, N. S.  wrote:

Hi Nicholas and Andrew,


Thank you both for your help.

@Nicholas,
I downloaded the 4.8 and 4.11 low-latency kernels (a requirement of the
application) and installed them:
sudo dpkg -i linux-header-4.8* linux-image-4.8*
sudo update-grub

But I am faced with a challenge: ZFS is not starting up during boot. See
attached.

Further digging, the status of the services is as follows:



$ systemctl status zfs-import-cache.service
● zfs-import-cache.service - Import ZFS pools by cache file
   Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; static; vendor 
preset: enabled)
   Active: failed (Result: exit-code) since Wed 2017-06-28 19:54:33 EEST; 12min 
ago
  Process: 1032 ExecStartPre=/sbin/modprobe zfs (code=exited, status=1/FAILURE)

Jun 28 19:54:33 ns-HP systemd[1]: Starting Import ZFS pools by cache file...
Jun 28 19:54:33 ns-HP modprobe[1032]: modprobe: FATAL: Module zfs not found in 
directory /lib/modules/4.8.0-040800-lowlatency
Jun 28 19:54:33 ns-HP systemd[1]: zfs-import-cache.service: Control process 
exited, code=exited status=1
Jun 28 19:54:33 ns-HP systemd[1]: Failed to start Import ZFS pools by cache 
file.
Jun 28 19:54:33 ns-HP systemd[1]: zfs-import-cache.service: Unit entered failed 
state.
Jun 28 19:54:33 ns-HP systemd[1]: zfs-import-cache.service: Failed with result 
'exit-code'.




ns@ns-HP:/usr/src/linux-headers-4.4.0-81$ systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; static; vendor 
preset: enabled)
   Active: failed (Result: exit-code) since Wed 2017-06-28 19:54:34 EEST; 13min 
ago
  Process: 1042 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
 Main PID: 1042 (code=exited, status=1/FAILURE)

Jun 28 19:54:33 ns-HP systemd[1]: Starting Mount ZFS filesystems...
Jun 28 19:54:34 ns-HP zfs[1042]: The ZFS modules are not loaded.
Jun 28 19:54:34 ns-HP zfs[1042]: Try running '/sbin/modprobe zfs' as root to 
load them.
Jun 28 19:54:34 ns-HP systemd[1]: zfs-mount.service: Main process exited, 
code=exited, status=1/FAILURE
Jun 28 

Re: Running KVM in addition to LXC on local LXD CLOUD

2017-06-28 Thread Nicholas Skaggs
I'm not sure how you installed the hwe kernel, but you should simply


 sudo apt-get install --install-recommends linux-generic-hwe-16.04
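For what it's worth, the zfs failure in your log looks like a side effect of
booting a mainline-PPA build (4.8.0-040800-lowlatency): those kernels do not
ship the zfs module, while the Ubuntu-built HWE kernels do. An untested sketch
of cleaning up once the HWE kernel is booting (the package names below are
only my guess from the version string in your log):

 # see which mainline kernel packages are installed
 dpkg -l | grep 040800

 # once the HWE kernel boots and zfs works, purge them, e.g. (names guessed):
 # sudo apt-get purge linux-image-4.8.0-040800-lowlatency linux-headers-4.8.0-040800-lowlatency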


On Thu, Jun 29, 2017 at 3:15 AM, N. S.  wrote:

> Hi Nicholas and Andrew,
>
>
> Thank you both for your help.
>
> @Nicholas,
> I downloaded the 4.8 and 4.11 low-latency kernels (a requirement of the
> application) and installed them:
> sudo dpkg -i linux-header-4.8* linux-image-4.8*
> sudo update-grub
>
> But I am faced with a challenge: ZFS is not starting up during boot.
> See attached.
>
> Further digging, the status of the services is as follows:
>
>
>
> $ systemctl status zfs-import-cache.service
> ● zfs-import-cache.service - Import ZFS pools by cache file
>Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; static;
> vendor preset: enabled)
>Active: failed (Result: exit-code) since Wed 2017-06-28 19:54:33 EEST;
> 12min ago
>   Process: 1032 ExecStartPre=/sbin/modprobe zfs (code=exited,
> status=1/FAILURE)
>
> Jun 28 19:54:33 ns-HP systemd[1]: Starting Import ZFS pools by cache
> file...
> Jun 28 19:54:33 ns-HP modprobe[1032]: modprobe: FATAL: Module zfs not
> found in directory /lib/modules/4.8.0-040800-lowlatency
> Jun 28 19:54:33 ns-HP systemd[1]: zfs-import-cache.service: Control
> process exited, code=exited status=1
> Jun 28 19:54:33 ns-HP systemd[1]: Failed to start Import ZFS pools by
> cache file.
> Jun 28 19:54:33 ns-HP systemd[1]: zfs-import-cache.service: Unit entered
> failed state.
> Jun 28 19:54:33 ns-HP systemd[1]: zfs-import-cache.service: Failed with
> result 'exit-code'.
>
>
>
>
> ns@ns-HP:/usr/src/linux-headers-4.4.0-81$ systemctl status
> zfs-mount.service
> ● zfs-mount.service - Mount ZFS filesystems
>Loaded: loaded (/lib/systemd/system/zfs-mount.service; static; vendor
> preset: enabled)
>Active: failed (Result: exit-code) since Wed 2017-06-28 19:54:34 EEST;
> 13min ago
>   Process: 1042 ExecStart=/sbin/zfs mount -a (code=exited,
> status=1/FAILURE)
>  Main PID: 1042 (code=exited, status=1/FAILURE)
>
> Jun 28 19:54:33 ns-HP systemd[1]: Starting Mount ZFS filesystems...
> Jun 28 19:54:34 ns-HP zfs[1042]: The ZFS modules are not loaded.
> Jun 28 19:54:34 ns-HP zfs[1042]: Try running '/sbin/modprobe zfs' as root
> to load them.
> Jun 28 19:54:34 ns-HP systemd[1]: zfs-mount.service: Main process exited,
> code=exited, status=1/FAILURE
> Jun 28 19:54:34 ns-HP systemd[1]: Failed to start Mount ZFS filesystems.
> Jun 28 19:54:34 ns-HP systemd[1]: zfs-mount.service: Unit entered failed
> state.
> Jun 28 19:54:34 ns-HP systemd[1]: zfs-mount.service: Failed with result
> 'exit-code'.
> ns@ns-HP:/usr/src/linux-headers-4.4.0-81$
>
> I tried to run modprobe zfs as advised, but:
>
> $ sudo -i
> [sudo] password for ns:
> root@ns-HP:~# /sbin/modprobe zfs
> modprobe: FATAL: Module zfs not found in directory
> /lib/modules/4.8.0-040800-lowlatency
> root@ns-HP:~#
>
>
>
> I know this might not be directly related to Juju but rather to the Ubuntu
> kernel, but I would appreciate it if you could help.
>
>
> Thanks,
>
> BR,
>
>
> NS
>
>
>
> Nicholas Skaggs  nicholas.skaggs at canonical.com
> Tue Jun 27 19:51:08 UTC 2017
>
> If it's possible, I would simply run the hwe kernel on xenial which
> provides 4.8+. Read more about running an updated stack here:
> https://wiki.ubuntu.com/Kernel/LTSEnablementStack
>
> This would solve your specific problem without worrying about running KVMs.
>
>
> --
> From: Andrew Wilkins
> Sent: Sunday, June 25, 2017 10:42 PM
> To: N. S.; juju@lists.ubuntu.com
> Subject: Re: Running KVM in addition to LXC on local LXD CLOUD
>
> On Sat, Jun 24, 2017 at 9:14 PM N. S.  wrote:
>
>> Hi,
>>
>>
>> I am running 10 machines on local LXD cloud, and it's fine.
>>
>> My host is Ubuntu 16.04, kernel 4.4.0-81.
>>
>> However, I have the following challenge:
>> One of the machines (M0) requires kernel 4.7+.
>>
>>
>> As is known, unlike KVM, LXC uses the same kernel as the host system, so
>> in this case (4.4.0-81) the requirement of M0 (4.7+) is not met.
>>
>>
>> I have read that starting with Juju 2.0, KVM is no longer supported.
>>
>
> Juju still supports kvm, but the old "local" provider which supported
> lxc/kvm is gone.
>
> You could run a kvm container from within a lxd machine with the right
> apparmor settings. Probably the most straightforward thing to do, though,
> would be to create a KVM VM yourself, install Ubuntu on it, and then
> manually 

Re: Running KVM in addition to LXC on local LXD CLOUD

2017-06-27 Thread Nicholas Skaggs
If it's possible, I would simply run the hwe kernel on xenial which
provides 4.8+. Read more about running an updated stack here:

https://wiki.ubuntu.com/Kernel/LTSEnablementStack

This would solve your specific problem without worrying about running KVMs.
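As a rough sketch (assuming a xenial host):

 sudo apt-get update
 sudo apt-get install --install-recommends linux-generic-hwe-16.04
 sudo reboot
 # after the reboot, uname -r should report a 4.8-series kernel

The metapackage also keeps pulling in newer HWE kernel point releases as they
are published, so you stay on a supported kernel.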

On Jun 24, 2017 11:14 PM, "N. S."  wrote:

> Hi,
>
>
> I am running 10 machines on local LXD cloud, and it's fine.
>
> My host is Ubuntu 16.04, kernel 4.4.0-81.
>
> However, I have the following challenge:
> One of the machines (M0) requires kernel 4.7+.
>
>
> As is known, unlike KVM, LXC uses the same kernel as the host system, so in
> this case (4.4.0-81) the requirement of M0 (4.7+) is not met.
>
>
> I have read that starting with Juju 2.0, KVM is no longer supported.
>
>
> How could I satisfy the requirement of M0?
>
> Thanks for your help
> BR,
> Nazih
>
>


Re: Running KVM in addition to LXC on local LXD CLOUD

2017-06-25 Thread Andrew Wilkins
On Sat, Jun 24, 2017 at 9:14 PM N. S.  wrote:

> Hi,
>
>
> I am running 10 machines on local LXD cloud, and it's fine.
>
> My host is Ubuntu 16.04, kernel 4.4.0-81.
>
> However, I have the following challenge:
> One of the machines (M0) requires kernel 4.7+.
>
>
> As is known, unlike KVM, LXC uses the same kernel as the host system, so in
> this case (4.4.0-81) the requirement of M0 (4.7+) is not met.
>
>
> I have read that starting with Juju 2.0, KVM is no longer supported.
>

Juju still supports kvm, but the old "local" provider which supported
lxc/kvm is gone.

You could run a kvm container from within a lxd machine with the right
apparmor settings. Probably the most straightforward thing to do, though,
would be to create a KVM VM yourself, install Ubuntu on it, and then
manually provision it using "juju add-machine ssh:".
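A rough sketch of that manual route (the guest name and address below are
made up for illustration; uvtool is just one way to create the VM):

 # on the host: create a xenial KVM guest
 sudo apt-get install uvtool
 uvt-simplestreams-libvirt sync release=xenial arch=amd64
 uvt-kvm create m0-kvm release=xenial   # injects your ssh public key
 uvt-kvm ip m0-kvm                      # prints the guest's address, say 192.168.122.50

 # then hand the VM to Juju as a manually provisioned machine
 juju add-machine ssh:ubuntu@192.168.122.50

After that, Juju can place the unit that needs the 4.7+ kernel on that
machine while the rest of the model stays on LXD.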


> How could I satisfy the requirement of M0?
>
> Thanks for your help
> BR,
> Nazih
>


Running KVM in addition to LXC on local LXD CLOUD

2017-06-24 Thread N. S.
Hi,


I am running 10 machines on local LXD cloud, and it's fine.

My host is Ubuntu 16.04, kernel 4.4.0-81.

However, I have the following challenge:
One of the machines (M0) requires kernel 4.7+.


As is known, unlike KVM, LXC uses the same kernel as the host system, so in
this case (4.4.0-81) the requirement of M0 (4.7+) is not met.


I have read that starting with Juju 2.0, KVM is no longer supported.


How could I satisfy the requirement of M0?

Thanks for your help
BR,
Nazih
