Greetings,

----- Original Message -----
> and also "ext4 over ploop over ext4" wasting disk space as overhead.

That is the case for all disk-file-as-disk-image containers and not unique to 
ploop.  You said that if you can't use OpenVZ and ZFS together (maybe in the 
future) then you'd switch to KVM... at which point you'd probably be using 
qcow2.  I'm just saying. :)
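
Just to put a number on that overhead: below is a rough, hypothetical Python 
sketch that compares the apparent size of a disk-image file with the blocks it 
actually consumes on the host.  The image path is only an example... point it 
at your own ploop or qcow2 file:

import os

def image_usage(path):
    """Compare a disk image's apparent size with the host blocks
    actually allocated to it."""
    st = os.stat(path)
    apparent = st.st_size            # size the file claims to be
    allocated = st.st_blocks * 512   # space really consumed on the host
    return apparent, allocated

# Hypothetical image path -- adjust for your own layout.
apparent, allocated = image_usage("/vz/private/101/root.hdd/root.hds")
print("apparent:  %6.1f GiB" % (apparent / 2**30))
print("allocated: %6.1f GiB" % (allocated / 2**30))

On a sparse image the allocated figure starts out much smaller than the 
apparent one and only grows as the container writes (and even deletes) data... 
which is where the "wasted space" feeling comes from.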

> But the live migration code from the OpenVZ kernel is not included in the
> mainline kernel, and the OpenVZ developers had to create CRIU instead.
> 
> Does this mean that something is wrong with the current live migration code?
> Or how else would you explain why live migration is not in the mainline kernel?

The OpenVZ legacy Checkpoint / Restore code isn't in the mainline kernel, just 
as OpenVZ itself is not in the mainline kernel.  They wanted to give the 
mainline kernel checkpoint and restore, so they created CRIU.  Please note that 
most of CRIU is "In Userspace", although they did have to add some kernel-based 
support... but as I understand it, that was minimal.

BTW, for the EL7-based Virtuozzo 7 kernel, they are switching to CRIU.

With the original implementation, I believe they tried to get it into the 
mainline kernel (and there was at least one other implementation not from 
OpenVZ, from Google I believe, and possibly a second one, that also tried to 
get into mainline)... after putting one or two years into the process, that 
effort failed.  Luckily they were successful with CRIU.
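
For anyone curious what the "In Userspace" part looks like in practice, here is 
a minimal, hypothetical sketch of driving CRIU from Python.  The PID and image 
directory are made up, CRIU generally needs to run as root, and options such as 
--shell-job depend on how the target process was started... so treat it as an 
outline rather than a recipe:

import os
import subprocess

PID = 1234                     # hypothetical PID of the process tree to checkpoint
IMG_DIR = "/tmp/criu-images"   # where CRIU writes its image files

os.makedirs(IMG_DIR, exist_ok=True)

# Checkpoint: dump the process tree into a set of image files.
subprocess.run(["criu", "dump", "-t", str(PID),
                "-D", IMG_DIR, "--shell-job"], check=True)

# Restore: recreate the process tree from those images.  In a live
# migration the image directory would first be copied to the
# destination host.
subprocess.run(["criu", "restore", "-D", IMG_DIR, "--shell-job"], check=True)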

> Yes, you are right, this is not very reflective,
> but as a first approximation you can easily evaluate
> the complexity of code by its past bug reports; evaluating
> code quality by the count of vulnerabilities is also common practice.
> For example, Postfix is scored as a high-code-quality mail server
> and sendmail/exim as low-code-quality mail servers
> based only on their history of past vulnerabilities.
> 
> So why can't I use the same practice for estimating the
> code quality and code stability of different parts of the OpenVZ kernel?
> 
> OpenVZ live migration is very hard code, a very hard task,
> and there are potentially many places for nontrivial bugs here.

I'll grant you that.  Checkpoint / Restore (which enables live migration) is a 
very difficult task and they worked really hard to make it work... and it has 
been maintained now for 7+ years.  Kudos to them.  They also had the expertise 
to create CRIU and get it into the kernel.  My point here is that if anyone can 
do it, the Odin/Parallels/OpenVZ/SWsoft guys can. :)

Yes, there are still some areas that are more difficult in a container... and 
some of the more advanced cases may not be possible to migrate.  I'm not aware 
of any specific cases, but I know at one point migrating a container with NFS 
wasn't working very well.  I'm not sure of the status of that now.  With that 
nginx bug, it looks like that is another such case... hopefully they'll figure 
it out.  The more cases that get tried and reported... the more areas they can 
fix and the more stuff they can get working.  My needs have been fairly simple 
and as a result... I've had very smooth sailing... but I know I may not be 
typical.

> Also, only one container has several public ("white") IPs; all the other
> containers use IPs from the 172.16.0.0 - 172.31.255.255 subnet,
> which exists only inside the OpenVZ servers - the rest of the network
> knows nothing about the 172.16/12 subnet or how to route IPs from it.
> 
> So, even if I wanted to use live migration - I can't do it right now.

I can definitely understand that and I certainly wasn't trying to twist your 
arm and make you use migration... but surely you realize a lot of people do use 
it and it is an important feature.
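
As an aside, the 172.16.0.0 - 172.31.255.255 range you mention is exactly the 
RFC 1918 172.16/12 block... a quick sanity check with Python's ipaddress 
module (the sample address is made up):

import ipaddress

block = ipaddress.ip_network("172.16.0.0/12")
print(block[0], block[-1])                           # 172.16.0.0 172.31.255.255
print(ipaddress.ip_address("172.20.1.5") in block)   # True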

On my work setup I can migrate but on my hobby / home setup I can't... because 
I only have one OpenVZ host node.  I wish I had two though.

Thanks for your considerate conversation / discussion.  Much appreciated.

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]
_______________________________________________
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users
