Re: [Users] Virtuozzo containers no longer a supported Virtuozzo product !?

2020-12-04 Thread Jehan Procaccia IMT

Great, that sounds better! Thanks.
To prevent others from being misled, maybe the note on the page
https://www.virtuozzo.com/support/all-products/virtuozzo-containers.html
should be extended with your sentence "The current version of the product
with Container technology goes under the name "Virtuozzo Hybrid Server 7""
and a link to it.


Regards.

Jehan

On 04/12/2020 at 21:42, Konstantin Khorenko wrote:

Hi guys,

> Please note: Virtuozzo Containers for Linux is no longer a supported
> Virtuozzo product. Users can purchase extended support until September
> 2020.


please don't panic,
this sentence is only about the old product "VIRTUOZZO CONTAINERS FOR
LINUX 4.7";

it is certainly not about the technology itself. :)

The current version of the product with Container technology goes
under the name "Virtuozzo Hybrid Server 7".


Here you can see the list of products and their lifecycle milestones:
https://www.virtuozzo.com/support/all-products/lifecycle-policies.html

--
Best regards,

Konstantin Khorenko,
Virtuozzo Linux Kernel Team

On 12/04/2020 11:20 PM, jjs - mainphrame wrote:
I hear what you are saying about deploying lxc on any Linux distro. 
That is a strong point.


But for me it's a good tradeoff. Maybe someday lxc/lxd will reach the
level of openvz, and if so, I'll re-evaluate, but for now I'm fine
with setting up dedicated openvz boxes, as I can deploy any Linux
distro in a container, or any OS in a VM.

Jake



On Fri, Dec 4, 2020 at 12:09 PM Narcis Garcia <informat...@actiu.net> wrote:


    I'm migrating servers from OpenVZ to LXC (by using ctctl) because I
    can deploy LXC on any GNU/Linux distro and architecture.

    BUT: LXC still does not work as optimally as OpenVZ, and OpenVZ is far
    more mature than LXC.



    Narcis Garcia
    On 4/12/20 at 20:15, jjs - mainphrame wrote:
    > I think it's just that virtuozzo is no longer supporting the
    > "containers only" solution. The new baseline is "containers and VMs".
    >
    > I agree they might have made that more clear, but it seems there's no
    > cause for worry. I've done long-term testing with lxc/lxd and, after
    > various issues, ended up moving all containers to openvz.
    >
    > The ability to do VMs is a plus, for instance if I have to hold my
    > nose and spin up a Windows VM for testing.
    >
    > Jake
    >
    >
    >
    > On Fri, Dec 4, 2020 at 11:06 AM Jehan Procaccia IMT
    > <jehan.procac...@imtbs-tsp.eu> wrote:

    >
    > then, is this misleading "marketing" information, or are Containers
    > (CT), which are to me the main added value of virtuozzo technology,
    > to be terminated?
    > that should be clarified by virtuozzo staff.
    >
    > indeed in
    > https://www.virtuozzo.com/products/virtuozzo-hybrid-server.html ,
    > containers => https://www.virtuozzo.com/products/compute.html
    > are mentioned,
    > and in
    > https://www.virtuozzo.com/fileadmin/user_upload/downloads/Data_Sheets/Virtuozzo7-Platform-DS-EN-Ltr.pdf
    >
    > I strongly defend virtuozzo/openVZ vs proxmox in my community because
    > of VZ CTs, which are supposedly far better than LXC containers (!?)
    >
    > Thanks for proving me right.
    >
    > regards.
    >
    >
    > On 04/12/2020 at 19:48, jjs - mainphrame wrote:
    >> That looked strange to me, but after looking at their website, it
    >> seems they're just announcing the end of support for old product
    >> lines.
    >>
    >> It looks like "Virtuozzo Hybrid Server" is basically what we have
    >> in openvz 7, plus premium features.
    >>
    >> Joe
    >>
    >> On Fri, Dec 4, 2020 at 10:36 AM Jehan Procaccia IMT
    >> <jehan.procac...@imtbs-tsp.eu> wrote:

    >>
    >> Hello
    >>
    >> Defending the added value of virtuozzo containers (CT), someone
    >> replied to me with:
    >>
    >> https://www.virtuozzo.com/support/all-products/virtuozzo-containers.html
    >>
    >> *Please note*: Virtuozzo Containers for Linux is no longer a
    >> supported Virtuozzo product. Users can purchase extended
    >> support until September 2020.
    >>
    >> is this serious!?
    >>
    >> Please let us know.
    >>
    >> Regards.
    >>
 

Re: [Users] Virtuozzo containers no longer a supported Virtuozzo product !?

2020-12-04 Thread Jehan Procaccia IMT
then, is this misleading "marketing" information, or are Containers
(CT), which are to me the main added value of virtuozzo technology,
to be terminated?

that should be clarified by virtuozzo staff.

indeed in
https://www.virtuozzo.com/products/virtuozzo-hybrid-server.html ,
containers => https://www.virtuozzo.com/products/compute.html
are mentioned,
and in 
https://www.virtuozzo.com/fileadmin/user_upload/downloads/Data_Sheets/Virtuozzo7-Platform-DS-EN-Ltr.pdf


I strongly defend virtuozzo/openVZ vs proxmox in my community because of
VZ CTs, which are supposedly far better than LXC containers (!?)


Thanks for proving me right.

regards.


On 04/12/2020 at 19:48, jjs - mainphrame wrote:
That looked strange to me, but after looking at their website, it 
seems they're just announcing the end of support for old product lines.


It looks like "Virtuozzo Hybrid Server" is basically what we have in 
openvz 7, plus premium features.


Joe

On Fri, Dec 4, 2020 at 10:36 AM Jehan Procaccia IMT
<jehan.procac...@imtbs-tsp.eu> wrote:


Hello

Defending the added value of virtuozzo containers (CT), someone
replied to me with:

https://www.virtuozzo.com/support/all-products/virtuozzo-containers.html

*Please note*: Virtuozzo Containers for Linux is no longer a
supported Virtuozzo product. Users can purchase extended support
until September 2020.

is this serious!?

Please let us know.

Regards.



[Users] Virtuozzo containers no longer a supported Virtuozzo product !?

2020-12-04 Thread Jehan Procaccia IMT

Hello

Defending the added value of virtuozzo containers (CT), someone replied
to me with:


https://www.virtuozzo.com/support/all-products/virtuozzo-containers.html

*Please note*: Virtuozzo Containers for Linux is no longer a supported
Virtuozzo product. Users can purchase extended support until
September 2020.


is this serious!?

Please let us know.

Regards.



[Users] routing between CT and EBtables

2020-10-05 Thread Jehan Procaccia IMT

Hello

For student lab purposes, we use OpenVZ 7 CTs that have multiple
interfaces and IPs for network simulation (routing).


We notice that the default CT configuration seems to drop packets
(ebtables?) that are emitted from a different IP than the one configured
for the CT itself.


Although it might be a careful (safe) default behaviour, how can I remove
that "feature" for this specific purpose of routing between CTs?


Thanks
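
For reference, a minimal sketch of how such source-filtering rules might
be inspected (and, once identified, removed) on the hardware node; the
chain and match in the delete example are hypothetical placeholders, since
the actual rules depend on the node's configuration:

# List the ebtables rules applied on the node, with per-rule counters
ebtables -t filter -L --Lc
ebtables -t nat -L --Lc

# Hypothetical example: once the anti-spoofing rule for the CT's veth
# interface has been identified in the listing above, delete it
# (replace FORWARD and the match specification with the real ones shown)
ebtables -D FORWARD -i veth<CTID> -j DROP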




Re: [Users] Issues after updating to 7.0.14 (136)

2020-07-06 Thread Jehan Procaccia IMT

Hello

If it can help, here is what I have done so far to try to re-enable the dead CTs:

# prlctl stop ldap2
Stopping the CT...
Failed to stop the CT: PRL_ERR_VZCTL_OPERATION_FAILED (Details: Cannot 
lock the Container

)
# cat /vz/lock/144dc737-b4e3-4c03-852c-25a6df06cee4.lck
6227
resuming
# ps auwx | grep 6227
root    6227  0.0  0.0  92140  6984 ?    S    15:10   0:00 
/usr/sbin/vzctl resume 144dc737-b4e3-4c03-852c-25a6df06cee4

# kill -9  6227

still cannot stop the CT  (Cannot lock the Container...)


# df |grep 144dc737-b4e3-4c03-852c-25a6df06cee4
/dev/ploop11432p1  10188052   2546636    7100848  27% 
/vz/root/144dc737-b4e3-4c03-852c-25a6df06cee4
none    1048576 0    1048576   0% 
/vz/private/144dc737-b4e3-4c03-852c-25a6df06cee4/dump/Dump/.criu.cgyard.56I2ls

# umount /dev/ploop11432p1

# ploop check -F 
/vz/private/144dc737-b4e3-4c03-852c-25a6df06cee4/root.hdd/root.hds

Reopen rw /vz/private/144dc737-b4e3-4c03-852c-25a6df06cee4/root.hdd/root.hds
Error in ploop_check (check.c:663): Dirty flag is set

# ploop mount 
/vz/private/144dc737-b4e3-4c03-852c-25a6df06cee4/root.hdd/DiskDescriptor.xml
Error in ploop_mount_image (ploop.c:2495): Image 
/vz/private/144dc737-b4e3-4c03-852c-25a6df06cee4/root.hdd/root.hds 
already used by device /dev/ploop11432

# df -H | grep ploop11432
=> nothing

I am lost, any help appreciated.

Thanks.
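
One possible next step, assuming /dev/ploop11432 really is a stale
attachment with nothing mounted from it any more, might be to detach the
device and re-run the check (a sketch, not a verified fix):

# Detach the stale ploop device
ploop umount -d /dev/ploop11432

# Re-run the consistency check on the image
ploop check -F /vz/private/144dc737-b4e3-4c03-852c-25a6df06cee4/root.hdd/root.hds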

On 06/07/2020 at 15:37, Jehan Procaccia IMT wrote:


Hello,

I am back to the initial problem related to that post: since I updated to
OpenVZ release 7.0.14 (136) | Virtuozzo Linux release 7.8.0 (609),
I am also facing corrupted CT status.


I don't see the exact same error as mentioned by Kevin Drysdale below
(ploop/fsck), but I am not able to enter certain CTs, nor can I stop
them:


[root@olb~]# prlctl stop trans8
Stopping the CT...
Failed to stop the CT: PRL_ERR_VZCTL_OPERATION_FAILED (Details: Cannot
lock the Container
)

[root@olb ~]# prlctl enter trans8
Unable to get init pid
enter into CT failed
exited from CT 02faecdd-ddb6-42eb-8103-202508f18256

For those CTs that fail to enter or stop, I noticed that there is a 2nd
device mounted with a name ending in /dump/Dump/.criu.cgyard.4EJB8c

[root@olb ~]# df -H | grep 02faecdd-ddb6-42eb-8103-202508f18256
/dev/ploop53152p1  11G  2,2G  7,7G  23%  /vz/root/02faecdd-ddb6-42eb-8103-202508f18256
none               537M    0  537M   0%  /vz/private/02faecdd-ddb6-42eb-8103-202508f18256/dump/Dump/.criu.cgyard.4EJB8c

[root@olb ~]# prlctl list | grep 02faecdd-ddb6-42eb-8103-202508f18256
{02faecdd-ddb6-42eb-8103-202508f18256}  running 157.159.196.17  CT isptrans8

I rebooted the whole hardware node; since the reboot, here is the
related vzctl.log:


2020-07-06T15:10:38+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : Removing the stale lock file /vz/lock/02faecdd-ddb6-42eb-8103-202508f18256.lck
2020-07-06T15:10:38+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : Restoring the Container ...
2020-07-06T15:10:38+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : Mount image: /vz/private/02faecdd-ddb6-42eb-8103-202508f18256/root.hdd
2020-07-06T15:10:38+0200 : Opening delta /vz/private/02faecdd-ddb6-42eb-8103-202508f18256/root.hdd/root.hds
2020-07-06T15:10:38+0200 : Opening delta /vz/private/02faecdd-ddb6-42eb-8103-202508f18256/root.hdd/root.hds
2020-07-06T15:10:38+0200 : Opening delta /vz/private/02faecdd-ddb6-42eb-8103-202508f18256/root.hdd/root.hds
2020-07-06T15:10:38+0200 : Adding delta dev=/dev/ploop53152 img=/vz/private/02faecdd-ddb6-42eb-8103-202508f18256/root.hdd/root.hds (rw)
2020-07-06T15:10:39+0200 : Mounted /dev/ploop53152p1 at /vz/root/02faecdd-ddb6-42eb-8103-202508f18256 fstype=ext4 data=',balloon_ino=12'
2020-07-06T15:10:39+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : Container is mounted
2020-07-06T15:10:40+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : Setting permissions for image=/vz/private/02faecdd-ddb6-42eb-8103-202508f18256/root.hdd
2020-07-06T15:10:40+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : Configure memguarantee: 0%
2020-07-06T15:18:12+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : Unable to get init pid
2020-07-06T15:18:12+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : enter into CT failed
2020-07-06T15:19:49+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : Cannot lock the Container
2020-07-06T15:25:33+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : Unable to get init pid
2020-07-06T15:25:33+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : enter into CT failed


On another CT failing to enter/stop, the same kind of logs appear, plus
an Error (criu):


2020-07-06T15:10:38+0200 vzctl : CT 4ae48335-5b63-475d-8629-c8d742cb0ba0 : Restoring the Container ...
2020-07-06T15:10:38+0200 vzctl : CT 4ae48335-5b63-475d-8629-c8d742cb0ba0 : Mo

Re: [Users] reload or refresh list of CT after move an already registered CT

2020-07-06 Thread Jehan Procaccia IMT
Thanks, that works fine after restarting prl-disp.service; my "manually
moved/restored" CT can now run on the second Hardware Node:


# systemctl restart prl-disp.service
# prlctl list --all |grep 144dc737-b4e3-4c03-852c-25a6df06cee4
{144dc737-b4e3-4c03-852c-25a6df06cee4}  suspended 192.168.1.1  CT ldap2
# prlctl stop ldap2
# prlctl start ldap2
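
Putting the pieces from this thread together, a minimal sketch of the whole
manual-move procedure might look as follows (the destination host name
"HWdest" is a placeholder; adjust paths and the CT UUID to your setup):

# On the source node: copy the CT's private area to the destination node
rsync -a /vz/private/144dc737-b4e3-4c03-852c-25a6df06cee4 root@HWdest:/vz/private/

# On the destination node: link the CT config into /etc/vz/conf
cd /etc/vz/conf
ln -s /vz/private/144dc737-b4e3-4c03-852c-25a6df06cee4/ve.conf 144dc737-b4e3-4c03-852c-25a6df06cee4.conf

# Make prl-disp re-read the registered containers, then start the CT
systemctl restart prl-disp.service
prlctl list --all | grep 144dc737-b4e3-4c03-852c-25a6df06cee4
prlctl start 144dc737-b4e3-4c03-852c-25a6df06cee4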

I hope I'll find a solution to
https://lists.openvz.org/pipermail/users/2020-July/007928.html ; I am
now afraid to upgrade any other node ...



On 06/07/2020 at 17:53, Jean Weisbuch wrote:
You usually have to restart the "prl-disp" service when you have this
kind of problem, and/or to "prlctl unregister" and/or "vzctl unregister"
the CT.



On 7/6/20 5:07 PM, Jehan Procaccia IMT wrote:

Hello

Because I have a failed CT on a hardware node (cf.
https://lists.openvz.org/pipermail/users/2020-July/007928.html ),


I manually moved the CT files (hdds, conf, etc ...) to another
hardware node (HW) that doesn't seem to have the problem yet (not
updated to OpenVZ release 7.0.14 (136)).


I did an rsync of /vz/private/CTID from HWsrc to HWdest and created the
associated link to the conf file in /etc/vz/conf on HWdest:


ln -s /vz/private/144dc737-b4e3-4c03-852c-25a6df06cee4/ve.conf 
144dc737-b4e3-4c03-852c-25a6df06cee4.conf


but still, a prlctl list --all doesn't show that newly "moved" CT.

How can I tell HWdest that there is a new CT on it? I tried a register
command:


# prlctl register /vz/private/144dc737-b4e3-4c03-852c-25a6df06cee4
Register the virtual environment...
Failed to register the virtual environment: 
PRL_ERR_VZCTL_OPERATION_FAILED (Details: Container is already 
registered with id 144dc737-b4e3-4c03-852c-25a6df06cee4
Container registration failed: Container is already registered with 
id 144dc737-b4e3-4c03-852c-25a6df06cee4

)

but it is already registered.

Is there a way to "reload/refresh" something so that the moved CT can
run?


Thanks.



[Users] reload or refresh list of CT after move an already registered CT

2020-07-06 Thread Jehan Procaccia IMT

Hello

Because I have a failed CT on a hardware node (cf.
https://lists.openvz.org/pipermail/users/2020-July/007928.html ),


I manually moved the CT files (hdds, conf, etc ...) to another hardware
node (HW) that doesn't seem to have the problem yet (not updated to OpenVZ
release 7.0.14 (136)).


I did an rsync of /vz/private/CTID from HWsrc to HWdest and created the
associated link to the conf file in /etc/vz/conf on HWdest:


ln -s /vz/private/144dc737-b4e3-4c03-852c-25a6df06cee4/ve.conf 
144dc737-b4e3-4c03-852c-25a6df06cee4.conf


but still, a prlctl list --all doesn't show that newly "moved" CT.

How can I tell HWdest that there is a new CT on it? I tried a register
command:


# prlctl register /vz/private/144dc737-b4e3-4c03-852c-25a6df06cee4
Register the virtual environment...
Failed to register the virtual environment: 
PRL_ERR_VZCTL_OPERATION_FAILED (Details: Container is already registered 
with id 144dc737-b4e3-4c03-852c-25a6df06cee4
Container registration failed: Container is already registered with id 
144dc737-b4e3-4c03-852c-25a6df06cee4

)

but it is already registered.

Is there a way to "reload/refresh" something so that the moved CT can
run?


Thanks.






Re: [Users] Issues after updating to 7.0.14 (136)

2020-07-06 Thread Jehan Procaccia IMT

Hello,

I am back to the initial problem related to that post: since I updated to
OpenVZ release 7.0.14 (136) | Virtuozzo Linux release 7.8.0 (609),
I am also facing corrupted CT status.


I don't see the exact same error as mentioned by Kevin Drysdale below
(ploop/fsck), but I am not able to enter certain CTs, nor can I stop
them:


[root@olb~]# prlctl stop trans8
Stopping the CT...
Failed to stop the CT: PRL_ERR_VZCTL_OPERATION_FAILED (Details: Cannot
lock the Container
)

[root@olb ~]# prlctl enter trans8
Unable to get init pid
enter into CT failed
exited from CT 02faecdd-ddb6-42eb-8103-202508f18256

For those CTs that fail to enter or stop, I noticed that there is a 2nd
device mounted with a name ending in /dump/Dump/.criu.cgyard.4EJB8c

[root@olb ~]# df -H | grep 02faecdd-ddb6-42eb-8103-202508f18256
/dev/ploop53152p1  11G  2,2G  7,7G  23%  /vz/root/02faecdd-ddb6-42eb-8103-202508f18256
none               537M    0  537M   0%  /vz/private/02faecdd-ddb6-42eb-8103-202508f18256/dump/Dump/.criu.cgyard.4EJB8c

[root@olb ~]# prlctl list | grep 02faecdd-ddb6-42eb-8103-202508f18256
{02faecdd-ddb6-42eb-8103-202508f18256}  running 157.159.196.17  CT isptrans8

I rebooted the whole hardware node; since the reboot, here is the related
vzctl.log:


2020-07-06T15:10:38+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : Removing the stale lock file /vz/lock/02faecdd-ddb6-42eb-8103-202508f18256.lck
2020-07-06T15:10:38+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : Restoring the Container ...
2020-07-06T15:10:38+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : Mount image: /vz/private/02faecdd-ddb6-42eb-8103-202508f18256/root.hdd
2020-07-06T15:10:38+0200 : Opening delta /vz/private/02faecdd-ddb6-42eb-8103-202508f18256/root.hdd/root.hds
2020-07-06T15:10:38+0200 : Opening delta /vz/private/02faecdd-ddb6-42eb-8103-202508f18256/root.hdd/root.hds
2020-07-06T15:10:38+0200 : Opening delta /vz/private/02faecdd-ddb6-42eb-8103-202508f18256/root.hdd/root.hds
2020-07-06T15:10:38+0200 : Adding delta dev=/dev/ploop53152 img=/vz/private/02faecdd-ddb6-42eb-8103-202508f18256/root.hdd/root.hds (rw)
2020-07-06T15:10:39+0200 : Mounted /dev/ploop53152p1 at /vz/root/02faecdd-ddb6-42eb-8103-202508f18256 fstype=ext4 data=',balloon_ino=12'
2020-07-06T15:10:39+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : Container is mounted
2020-07-06T15:10:40+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : Setting permissions for image=/vz/private/02faecdd-ddb6-42eb-8103-202508f18256/root.hdd
2020-07-06T15:10:40+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : Configure memguarantee: 0%
2020-07-06T15:18:12+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : Unable to get init pid
2020-07-06T15:18:12+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : enter into CT failed
2020-07-06T15:19:49+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : Cannot lock the Container
2020-07-06T15:25:33+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : Unable to get init pid
2020-07-06T15:25:33+0200 vzctl : CT 02faecdd-ddb6-42eb-8103-202508f18256 : enter into CT failed


On another CT failing to enter/stop, the same kind of logs appear, plus an Error (criu):

2020-07-06T15:10:38+0200 vzctl : CT 4ae48335-5b63-475d-8629-c8d742cb0ba0 : Restoring the Container ...
2020-07-06T15:10:38+0200 vzctl : CT 4ae48335-5b63-475d-8629-c8d742cb0ba0 : Mount image: /vz/private/4ae48335-5b63-475d-8629-c8d742cb0ba0/root.hdd
2020-07-06T15:10:38+0200 : Opening delta /vz/private/4ae48335-5b63-475d-8629-c8d742cb0ba0/root.hdd/root.hds
2020-07-06T15:10:39+0200 : Opening delta /vz/private/4ae48335-5b63-475d-8629-c8d742cb0ba0/root.hdd/root.hds
2020-07-06T15:10:39+0200 : Opening delta /vz/private/4ae48335-5b63-475d-8629-c8d742cb0ba0/root.hdd/root.hds
2020-07-06T15:10:39+0200 : Adding delta dev=/dev/ploop36049 img=/vz/private/4ae48335-5b63-475d-8629-c8d742cb0ba0/root.hdd/root.hds (rw)
2020-07-06T15:10:41+0200 : Mounted /dev/ploop36049p1 at /vz/root/4ae48335-5b63-475d-8629-c8d742cb0ba0 fstype=ext4 data=',balloon_ino=12'
2020-07-06T15:10:41+0200 vzctl : CT 4ae48335-5b63-475d-8629-c8d742cb0ba0 : Container is mounted
2020-07-06T15:10:41+0200 vzctl : CT 4ae48335-5b63-475d-8629-c8d742cb0ba0 : Setting permissions for image=/vz/private/4ae48335-5b63-475d-8629-c8d742cb0ba0/root.hdd
2020-07-06T15:10:41+0200 vzctl : CT 4ae48335-5b63-475d-8629-c8d742cb0ba0 : Configure memguarantee: 0%
2020-07-06T15:10:57+0200 vzeventd : Run: /etc/vz/vzevent.d/ve-stop id=4ae48335-5b63-475d-8629-c8d742cb0ba0
2020-07-06T15:10:57+0200 vzctl : CT 4ae48335-5b63-475d-8629-c8d742cb0ba0 : (03.038774) Error (criu/util.c:666): exited, status=4
2020-07-06T15:10:57+0200 vzctl : CT 4ae48335-5b63-475d-8629-c8d742cb0ba0 : (14.446513)  1: Error

Re: [Users] Issues after updating to 7.0.14 (136)

2020-07-02 Thread Jehan Procaccia IMT

"updating to 7.0.14 (136)" !?

I did an update yesterday; I am far behind that version:

# cat /etc/vzlinux-release
Virtuozzo Linux release 7.8.0 (609)

# uname -a
Linux localhost 3.10.0-1127.8.2.vz7.151.14 #1 SMP Tue Jun 9 12:58:54 MSK 2020 x86_64 x86_64 x86_64 GNU/Linux

Why don't you try to update to the latest version?
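
For reference, a minimal sketch of how the node's release is usually
checked and an update applied (assuming the standard OpenVZ 7 yum
repositories are configured; reboot only once no CTs are running or they
have been migrated away):

# Show the installed release strings
cat /etc/virtuozzo-release
cat /etc/vzlinux-release

# Apply all pending updates from the configured repositories, then reboot
yum update
reboot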


On 29/06/2020 at 12:30, Kevin Drysdale wrote:

Hello,

After updating one of our OpenVZ VPS hosting nodes at the end of last 
week, we've started to have issues with corruption apparently 
occurring inside containers.  Issues of this nature have never 
affected the node previously, and there do not appear to be any 
hardware issues that could explain this.


Specifically, a few hours after updating, we began to see containers 
experiencing errors such as this in the logs:


[90471.678994] EXT4-fs (ploop35454p1): error count since last fsck: 25
[90471.679022] EXT4-fs (ploop35454p1): initial error at time 
1593205255: ext4_ext_find_extent:904: inode 136399
[90471.679030] EXT4-fs (ploop35454p1): last error at time 1593232922: 
ext4_ext_find_extent:904: inode 136399

[95189.954569] EXT4-fs (ploop42983p1): error count since last fsck: 67
[95189.954582] EXT4-fs (ploop42983p1): initial error at time 
1593210174: htree_dirblock_to_tree:918: inode 926441: block 3683060
[95189.954589] EXT4-fs (ploop42983p1): last error at time 1593276902: 
ext4_iget:4435: inode 1849777

[95714.207432] EXT4-fs (ploop60706p1): error count since last fsck: 42
[95714.207447] EXT4-fs (ploop60706p1): initial error at time 
1593210489: ext4_ext_find_extent:904: inode 136272
[95714.207452] EXT4-fs (ploop60706p1): last error at time 1593231063: 
ext4_ext_find_extent:904: inode 136272


Shutting the containers down and manually mounting and e2fsck'ing 
their filesystems did clear these errors, but each of the containers 
(which were mostly used for running Plesk) had widespread issues with 
corrupt or missing files after the fsck's completed, necessitating 
their being restored from backup.
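
For context, a minimal sketch of that manual fsck procedure on a
ploop-backed CT; the CTID and the ploop partition device are placeholders
(the actual device name is printed by "ploop mount"):

# Stop the CT so its filesystem is no longer in use
vzctl stop <CTID>

# Attach the ploop image (without mounting the filesystem), then fsck it
ploop mount /vz/private/<CTID>/root.hdd/DiskDescriptor.xml
e2fsck -f /dev/ploopXXXXXp1

# Detach the image again and restart the CT
ploop umount /vz/private/<CTID>/root.hdd/DiskDescriptor.xml
vzctl start <CTID>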


Concurrently, we also began to see messages like this appearing in 
/var/log/vzctl.log, which again have never appeared at any point prior 
to this update being installed:


/var/log/vzctl.log:2020-06-26T21:05:19+0100 : Error in fill_hole 
(check.c:240): Warning: ploop image 
'/vz/private/8288448/root.hdd/root.hds' is sparse
/var/log/vzctl.log:2020-06-26T21:09:41+0100 : Error in fill_hole 
(check.c:240): Warning: ploop image 
'/vz/private/8288450/root.hdd/root.hds' is sparse
/var/log/vzctl.log:2020-06-26T21:16:22+0100 : Error in fill_hole 
(check.c:240): Warning: ploop image 
'/vz/private/8288451/root.hdd/root.hds' is sparse
/var/log/vzctl.log:2020-06-26T21:19:57+0100 : Error in fill_hole 
(check.c:240): Warning: ploop image 
'/vz/private/8288452/root.hdd/root.hds' is sparse
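
For what it's worth, one simple way to check whether an image really has
become sparse is to compare its apparent size with the space actually
allocated on disk (the path below is one of the images from the warnings
above):

# A large gap between apparent size and allocated size indicates sparseness
du -h --apparent-size /vz/private/8288448/root.hdd/root.hds
du -h /vz/private/8288448/root.hdd/root.hds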


The basic procedure we follow when updating our nodes is as follows:

1. Update the standby node we keep spare for this process
2. vzmigrate all containers from the live node being updated to the
   standby node
3. Update the live node
4. Reboot the live node
5. vzmigrate the containers from the standby node back to the live
   node they originally came from (see the sketch after this list)
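
A minimal sketch of what steps 2 and 5 typically look like (the node
hostnames and CTID below are placeholders, and the exact vzmigrate
options, e.g. for online/live migration, depend on the installed version):

# Step 2: migrate a container from the live node to the standby node
vzmigrate standby-node.example.com <CTID>

# Step 5: migrate it back from the standby node to the original live node
vzmigrate live-node.example.com <CTID>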


So the only tool which has been used to affect these containers is
'vzmigrate' itself, and I'm at something of a loss as to how to explain
the root.hdd images for these containers containing sparse gaps.  This
is something we have never done, as we have always been aware that
OpenVZ does not support the use of sparse files inside a container's
hard drive image.  And the fact that these images have suddenly become
sparse at the same time they have started to exhibit filesystem
corruption is somewhat concerning.


We can restore all affected containers from backups, but I wanted to 
get in touch with the list to see if anyone else at any other site has 
experienced these or similar issues after applying the 7.0.14 (136) 
update.


Thank you,
Kevin Drysdale.



