[Users] Registration

2020-07-02 Thread Ross Ryder
Hi,

How can I register to post here:

https://forum.openvz.org/index.php


Re: [Users] Issues after updating to 7.0.14 (136)

2020-07-02 Thread Jehan PROCACCIA
Yes, you are right, I do get the same virtuozzo-release as mentioned in the
initial subject; sorry for the noise.

# cat /etc/virtuozzo-release 
OpenVZ release 7.0.14 (136) 

But anyway, I don't see any ploop / fsck errors in the host's /var/log/vzctl.log
or inside the CT. Where did you see those errors?
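
(For reference, a rough sketch of where such messages would normally turn up; standard log locations are assumed and may differ per node:)

grep -i 'sparse' /var/log/vzctl.log     # the ploop "is sparse" warnings from vzctl/ploop
dmesg | grep 'EXT4-fs (ploop'           # per-ploop ext4 error counters logged by the host kernel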

Jehan . 


De: "jjs - mainphrame"  
À: "OpenVZ users"  
Envoyé: Jeudi 2 Juillet 2020 19:33:23 
Objet: Re: [Users] Issues after updating to 7.0.14 (136) 

Thanks for that sanity check, the conundrum is resolved. vzlinux-release and 
virtuozzo-release are indeed different things. 
Jake 

On Thu, Jul 2, 2020 at 10:27 AM Jonathan Wright <jonat...@knownhost.com> wrote:

/etc/redhat-release and /etc/virtuozzo-release are two different things. 
On 7/2/20 12:16 PM, jjs - mainphrame wrote: 


Jehan - 

I get the same output here - 

[root@annie ~]# yum repolist |grep virt 
virtuozzolinux-base VirtuozzoLinux Base 15,415+189 
virtuozzolinux-updates VirtuozzoLinux Updates 0 

I'm baffled as to how you're on 7.8.0 while I'm at 7.0.15 even though I'm fully
up to date.

# uname -a 
Linux annie.ufcfan.org 3.10.0-1127.8.2.vz7.151.10 #1 SMP Mon Jun 1 19:05:52 MSK 2020 x86_64 x86_64 x86_64 GNU/Linux

Jake 

On Thu, Jul 2, 2020 at 10:08 AM Jehan PROCACCIA <jehan.procac...@imtbs-tsp.eu> wrote:


no factory , just repos virtuozzolinux-base and openvz-os 

# yum repolist |grep virt 
virtuozzolinux-base VirtuozzoLinux Base 15 415+189 
virtuozzolinux-updates VirtuozzoLinux Updates 0 

Jehan . 


De: "jjs - mainphrame" < [ mailto:j...@mainphrame.com | j...@mainphrame.com ] > 
À: "OpenVZ users" < [ mailto:users@openvz.org | users@openvz.org ] > 
Cc: "Kevin Drysdale" < [ mailto:kevin.drysd...@iomart.com | 
kevin.drysd...@iomart.com ] > 
Envoyé: Jeudi 2 Juillet 2020 18:22:33 
Objet: Re: [Users] Issues after updating to 7.0.14 (136) 

Jehan, are you running factory? 

My ovz hosts are up to date, and I see: 

[root@annie ~]# cat /etc/virtuozzo-release 
OpenVZ release 7.0.15 (222) 

Jake 


On Thu, Jul 2, 2020 at 9:08 AM Jehan Procaccia IMT <jehan.procac...@imtbs-tsp.eu> wrote:


"updating to 7.0.14 (136)" !? 

I did an update yesterday , I am far behind that version 

# cat /etc/vzlinux-release 
Virtuozzo Linux release 7.8.0 (609) 

# uname -a 
Linux localhost 3.10.0-1127.8.2.vz7.151.14 #1 SMP Tue Jun 9 12:58:54 MSK 2020 
x86_64 x86_64 x86_64 GNU/Linux 

why don't you try to update to latest version ? 


On 29/06/2020 at 12:30, Kevin Drysdale wrote:

Hello, 

After updating one of our OpenVZ VPS hosting nodes at the end of last week, 
we've started to have issues with corruption apparently occurring inside 
containers. Issues of this nature have never affected the node previously, and 
there do not appear to be any hardware issues that could explain this. 

Specifically, a few hours after updating, we began to see containers 
experiencing errors such as this in the logs: 

[90471.678994] EXT4-fs (ploop35454p1): error count since last fsck: 25 
[90471.679022] EXT4-fs (ploop35454p1): initial error at time 1593205255: 
ext4_ext_find_extent:904: inode 136399 
[90471.679030] EXT4-fs (ploop35454p1): last error at time 1593232922: 
ext4_ext_find_extent:904: inode 136399 
[95189.954569] EXT4-fs (ploop42983p1): error count since last fsck: 67 
[95189.954582] EXT4-fs (ploop42983p1): initial error at time 1593210174: 
htree_dirblock_to_tree:918: inode 926441: block 3683060 
[95189.954589] EXT4-fs (ploop42983p1): last error at time 1593276902: 
ext4_iget:4435: inode 1849777 
[95714.207432] EXT4-fs (ploop60706p1): error count since last fsck: 42 
[95714.207447] EXT4-fs (ploop60706p1): initial error at time 1593210489: 
ext4_ext_find_extent:904: inode 136272 
[95714.207452] EXT4-fs (ploop60706p1): last error at time 1593231063: 
ext4_ext_find_extent:904: inode 136272 

Shutting the containers down and manually mounting and e2fsck'ing their 
filesystems did clear these errors, but each of the containers (which were 
mostly used for running Plesk) had widespread issues with corrupt or missing 
files after the fsck's completed, necessitating their being restored from 
backup. 
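
(A minimal sketch of that manual check, assuming the usual ploop layout; the container ID and ploop device number are placeholders taken from the examples in this thread:)

vzctl stop 8288448                                           # stop the affected container
ploop mount /vz/private/8288448/root.hdd/DiskDescriptor.xml  # attaches the image and reports the ploop device, e.g. /dev/ploop12345
e2fsck -f /dev/ploop12345p1                                  # force-check the first partition inside the image
ploop umount /vz/private/8288448/root.hdd/DiskDescriptor.xml
vzctl start 8288448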

Concurrently, we also began to see messages like this appearing in 
/var/log/vzctl.log, which again have never appeared at any point prior to this 
update being installed: 

/var/log/vzctl.log:2020-06-26T21:05:19+0100 : Error in fill_hole (check.c:240): 
Warning: ploop image '/vz/private/8288448/root.hdd/root.hds' is sparse 
/var/log/vzctl.log:2020-06-26T21:09:41+0100 : Error in fill_hole (check.c:240): 
Warning: ploop image '/vz/private/8288450/root.hdd/root.hds' is sparse 
/var/log/vzctl.log:2020-06-26T21:16:22+0100 : Error in fill_hole (check.c:240): 
Warning: ploop image '/vz/private/8288451/root.hdd/root.hds' is sparse 
/var/log/vzctl.log:2020-06-26T21:19:57+0100 

Re: [Users] Issues after updating to 7.0.14 (136)

2020-07-02 Thread jjs - mainphrame
Thanks for that sanity check, the conundrum is resolved. vzlinux-release
and virtuozzo-release are indeed different things.

Jake

On Thu, Jul 2, 2020 at 10:27 AM Jonathan Wright wrote:

> /etc/redhat-release and /etc/virtuozzo-release are two different things.
> On 7/2/20 12:16 PM, jjs - mainphrame wrote:
>
> Jehan -
>
> I get the same output here -
>
> [root@annie ~]# yum repolist  |grep virt
> virtuozzolinux-baseVirtuozzoLinux Base
>  15,415+189
> virtuozzolinux-updates VirtuozzoLinux Updates
>  0
>
> I'm baffled as to how you're on 7.8.0 while I'm at 7.0.15 even though I'm
> fully up to date.
>
> # uname -a
> Linux annie.ufcfan.org 3.10.0-1127.8.2.vz7.151.10 #1 SMP Mon Jun 1
> 19:05:52 MSK 2020 x86_64 x86_64 x86_64 GNU/Linux
>
> Jake
>
> On Thu, Jul 2, 2020 at 10:08 AM Jehan PROCACCIA <
> jehan.procac...@imtbs-tsp.eu> wrote:
>
>> no factory , just repos virtuozzolinux-base and openvz-os
>>
>> # yum repolist  |grep virt
>> virtuozzolinux-baseVirtuozzoLinux Base15
>> 415+189
>> virtuozzolinux-updates VirtuozzoLinux
>> Updates  0
>>
>> Jehan .
>>
>> --
>> From: "jjs - mainphrame"
>> To: "OpenVZ users"
>> Cc: "Kevin Drysdale"
>> Sent: Thursday, 2 July 2020 18:22:33
>> Subject: Re: [Users] Issues after updating to 7.0.14 (136)
>>
>> Jehan, are you running factory?
>>
>> My ovz hosts are up to date, and I see:
>>
>> [root@annie ~]# cat /etc/virtuozzo-release
>> OpenVZ release 7.0.15 (222)
>>
>> Jake
>>
>>
>> On Thu, Jul 2, 2020 at 9:08 AM Jehan Procaccia IMT <
>> jehan.procac...@imtbs-tsp.eu> wrote:
>>
>>> "updating to 7.0.14 (136)" !?
>>>
>>> I did an update yesterday , I am far behind that version
>>>
>>> *# cat /etc/vzlinux-release*
>>> *Virtuozzo Linux release 7.8.0 (609)*
>>>
>>> *# uname -a *
>>> *Linux localhost 3.10.0-1127.8.2.vz7.151.14 #1 SMP Tue Jun 9 12:58:54
>>> MSK 2020 x86_64 x86_64 x86_64 GNU/Linux*
>>>
>>> why don't you try to update to latest version ?
>>>
>>>
>>> On 29/06/2020 at 12:30, Kevin Drysdale wrote:
>>>
>>> Hello,
>>>
>>> After updating one of our OpenVZ VPS hosting nodes at the end of last
>>> week, we've started to have issues with corruption apparently occurring
>>> inside containers.  Issues of this nature have never affected the node
>>> previously, and there do not appear to be any hardware issues that could
>>> explain this.
>>>
>>> Specifically, a few hours after updating, we began to see containers
>>> experiencing errors such as this in the logs:
>>>
>>> [90471.678994] EXT4-fs (ploop35454p1): error count since last fsck: 25
>>> [90471.679022] EXT4-fs (ploop35454p1): initial error at time 1593205255:
>>> ext4_ext_find_extent:904: inode 136399
>>> [90471.679030] EXT4-fs (ploop35454p1): last error at time 1593232922:
>>> ext4_ext_find_extent:904: inode 136399
>>> [95189.954569] EXT4-fs (ploop42983p1): error count since last fsck: 67
>>> [95189.954582] EXT4-fs (ploop42983p1): initial error at time 1593210174:
>>> htree_dirblock_to_tree:918: inode 926441: block 3683060
>>> [95189.954589] EXT4-fs (ploop42983p1): last error at time 1593276902:
>>> ext4_iget:4435: inode 1849777
>>> [95714.207432] EXT4-fs (ploop60706p1): error count since last fsck: 42
>>> [95714.207447] EXT4-fs (ploop60706p1): initial error at time 1593210489:
>>> ext4_ext_find_extent:904: inode 136272
>>> [95714.207452] EXT4-fs (ploop60706p1): last error at time 1593231063:
>>> ext4_ext_find_extent:904: inode 136272
>>>
>>> Shutting the containers down and manually mounting and e2fsck'ing their
>>> filesystems did clear these errors, but each of the containers (which were
>>> mostly used for running Plesk) had widespread issues with corrupt or
>>> missing files after the fsck's completed, necessitating their being
>>> restored from backup.
>>>
>>> Concurrently, we also began to see messages like this appearing in
>>> /var/log/vzctl.log, which again have never appeared at any point prior to
>>> this update being installed:
>>>
>>> /var/log/vzctl.log:2020-06-26T21:05:19+0100 : Error in fill_hole
>>> (check.c:240): Warning: ploop image '/vz/private/8288448/root.hdd/root.hds'
>>> is sparse
>>> /var/log/vzctl.log:2020-06-26T21:09:41+0100 : Error in fill_hole
>>> (check.c:240): Warning: ploop image '/vz/private/8288450/root.hdd/root.hds'
>>> is sparse
>>> /var/log/vzctl.log:2020-06-26T21:16:22+0100 : Error in fill_hole
>>> (check.c:240): Warning: ploop image '/vz/private/8288451/root.hdd/root.hds'
>>> is sparse
>>> /var/log/vzctl.log:2020-06-26T21:19:57+0100 : Error in fill_hole
>>> (check.c:240): Warning: ploop image '/vz/private/8288452/root.hdd/root.hds'
>>> is sparse
>>>
>>> The basic procedure we follow when updating our nodes is as follows:
>>>
>>> 1, Update the standby node we keep spare for this process
>>> 2. vzmigrate all containers from the live node being updated to the
>>> standby node
>>> 3. Update the live node
>>> 4. Reboot the live node
>>> 5. vzmigrate the containers from the standby node 

Re: [Users] Issues after updating to 7.0.14 (136)

2020-07-02 Thread Jonathan Wright

/etc/redhat-release and /etc/virtuozzo-release are two different things.
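
(For comparison, a quick sketch that prints each release file mentioned in this thread side by side; not every file exists on every node:)

for f in /etc/redhat-release /etc/vzlinux-release /etc/virtuozzo-release; do
    printf '%s: ' "$f"; cat "$f" 2>/dev/null || echo '(not present)'
done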

On 7/2/20 12:16 PM, jjs - mainphrame wrote:

Jehan -

I get the same output here -

[root@annie ~]# yum repolist  |grep virt
virtuozzolinux-base    VirtuozzoLinux Base      15,415+189
virtuozzolinux-updates VirtuozzoLinux Updates                0

I'm baffled as to how you're on 7.8.0 while I'm at 7.0.15 even though
I'm fully up to date.


# uname -a
Linux annie.ufcfan.org  
3.10.0-1127.8.2.vz7.151.10 #1 SMP Mon Jun 1 19:05:52 MSK 2020 x86_64 
x86_64 x86_64 GNU/Linux


Jake

On Thu, Jul 2, 2020 at 10:08 AM Jehan PROCACCIA <jehan.procac...@imtbs-tsp.eu> wrote:


no factory , just repos virtuozzolinux-base and openvz-os

# yum repolist  |grep virt
virtuozzolinux-base    VirtuozzoLinux
Base    15 415+189
virtuozzolinux-updates VirtuozzoLinux
Updates  0

Jehan .


From: "jjs - mainphrame" <j...@mainphrame.com>
To: "OpenVZ users" <users@openvz.org>
Cc: "Kevin Drysdale" <kevin.drysd...@iomart.com>
Sent: Thursday, 2 July 2020 18:22:33
Subject: Re: [Users] Issues after updating to 7.0.14 (136)

Jehan, are you running factory?

My ovz hosts are up to date, and I see:

[root@annie ~]# cat /etc/virtuozzo-release
OpenVZ release 7.0.15 (222)

Jake


On Thu, Jul 2, 2020 at 9:08 AM Jehan Procaccia IMT <jehan.procac...@imtbs-tsp.eu> wrote:

"updating to 7.0.14 (136)" !?

I did an update yesterday , I am far behind that version

# cat /etc/vzlinux-release
Virtuozzo Linux release 7.8.0 (609)

# uname -a
Linux localhost 3.10.0-1127.8.2.vz7.151.14 #1 SMP Tue Jun 9 12:58:54 MSK 2020 x86_64 x86_64 x86_64 GNU/Linux
why don't you try to update to latest version ?


On 29/06/2020 at 12:30, Kevin Drysdale wrote:

Hello,

After updating one of our OpenVZ VPS hosting nodes at the
end of last week, we've started to have issues with
corruption apparently occurring inside containers.  Issues
of this nature have never affected the node previously,
and there do not appear to be any hardware issues that
could explain this.

Specifically, a few hours after updating, we began to see
containers experiencing errors such as this in the logs:

[90471.678994] EXT4-fs (ploop35454p1): error count since
last fsck: 25
[90471.679022] EXT4-fs (ploop35454p1): initial error at
time 1593205255: ext4_ext_find_extent:904: inode 136399
[90471.679030] EXT4-fs (ploop35454p1): last error at time
1593232922: ext4_ext_find_extent:904: inode 136399
[95189.954569] EXT4-fs (ploop42983p1): error count since
last fsck: 67
[95189.954582] EXT4-fs (ploop42983p1): initial error at
time 1593210174: htree_dirblock_to_tree:918: inode 926441:
block 3683060
[95189.954589] EXT4-fs (ploop42983p1): last error at time
1593276902: ext4_iget:4435: inode 1849777
[95714.207432] EXT4-fs (ploop60706p1): error count since
last fsck: 42
[95714.207447] EXT4-fs (ploop60706p1): initial error at
time 1593210489: ext4_ext_find_extent:904: inode 136272
[95714.207452] EXT4-fs (ploop60706p1): last error at time
1593231063: ext4_ext_find_extent:904: inode 136272

Shutting the containers down and manually mounting and
e2fsck'ing their filesystems did clear these errors, but
each of the containers (which were mostly used for running
Plesk) had widespread issues with corrupt or missing files
after the fsck's completed, necessitating their being
restored from backup.

Concurrently, we also began to see messages like this
appearing in /var/log/vzctl.log, which again have never
appeared at any point prior to this update being installed:

/var/log/vzctl.log:2020-06-26T21:05:19+0100 : Error in
fill_hole (check.c:240): Warning: ploop image
'/vz/private/8288448/root.hdd/root.hds' is sparse
/var/log/vzctl.log:2020-06-26T21:09:41+0100 : Error in
fill_hole (check.c:240): Warning: ploop image
'/vz/private/8288450/root.hdd/root.hds' is sparse
/var/log/vzctl.log:2020-06-26T21:16:22+0100 : Error in
fill_hole (check.c:240): Warning: ploop image
'/vz/private/8288451/root.hdd/root.hds' is sparse
/var/log/vzctl.log:2020-06-26T21:19:57+0100 : Error in
fill_hole (check.c:240): Warning: 

Re: [Users] Issues after updating to 7.0.14 (136)

2020-07-02 Thread jjs - mainphrame
Jehan -

I get the same output here -

[root@annie ~]# yum repolist  |grep virt
virtuozzolinux-baseVirtuozzoLinux Base
 15,415+189
virtuozzolinux-updates VirtuozzoLinux Updates
   0

I'm baffled as to how you're on 7.8.0 while I'm at 7.0.15 even though I'm
fully up to date.

# uname -a
Linux annie.ufcfan.org 3.10.0-1127.8.2.vz7.151.10 #1 SMP Mon Jun 1 19:05:52
MSK 2020 x86_64 x86_64 x86_64 GNU/Linux

Jake

On Thu, Jul 2, 2020 at 10:08 AM Jehan PROCACCIA <
jehan.procac...@imtbs-tsp.eu> wrote:

> no factory , just repos virtuozzolinux-base and openvz-os
>
> # yum repolist  |grep virt
> virtuozzolinux-baseVirtuozzoLinux Base15
> 415+189
> virtuozzolinux-updates VirtuozzoLinux
> Updates  0
>
> Jehan .
>
> --
> From: "jjs - mainphrame"
> To: "OpenVZ users"
> Cc: "Kevin Drysdale"
> Sent: Thursday, 2 July 2020 18:22:33
> Subject: Re: [Users] Issues after updating to 7.0.14 (136)
>
> Jehan, are you running factory?
>
> My ovz hosts are up to date, and I see:
>
> [root@annie ~]# cat /etc/virtuozzo-release
> OpenVZ release 7.0.15 (222)
>
> Jake
>
>
> On Thu, Jul 2, 2020 at 9:08 AM Jehan Procaccia IMT <
> jehan.procac...@imtbs-tsp.eu> wrote:
>
>> "updating to 7.0.14 (136)" !?
>>
>> I did an update yesterday , I am far behind that version
>>
>> *# cat /etc/vzlinux-release*
>> *Virtuozzo Linux release 7.8.0 (609)*
>>
>> *# uname -a *
>> *Linux localhost 3.10.0-1127.8.2.vz7.151.14 #1 SMP Tue Jun 9 12:58:54 MSK
>> 2020 x86_64 x86_64 x86_64 GNU/Linux*
>>
>> why don't you try to update to latest version ?
>>
>>
>> On 29/06/2020 at 12:30, Kevin Drysdale wrote:
>>
>> Hello,
>>
>> After updating one of our OpenVZ VPS hosting nodes at the end of last
>> week, we've started to have issues with corruption apparently occurring
>> inside containers.  Issues of this nature have never affected the node
>> previously, and there do not appear to be any hardware issues that could
>> explain this.
>>
>> Specifically, a few hours after updating, we began to see containers
>> experiencing errors such as this in the logs:
>>
>> [90471.678994] EXT4-fs (ploop35454p1): error count since last fsck: 25
>> [90471.679022] EXT4-fs (ploop35454p1): initial error at time 1593205255:
>> ext4_ext_find_extent:904: inode 136399
>> [90471.679030] EXT4-fs (ploop35454p1): last error at time 1593232922:
>> ext4_ext_find_extent:904: inode 136399
>> [95189.954569] EXT4-fs (ploop42983p1): error count since last fsck: 67
>> [95189.954582] EXT4-fs (ploop42983p1): initial error at time 1593210174:
>> htree_dirblock_to_tree:918: inode 926441: block 3683060
>> [95189.954589] EXT4-fs (ploop42983p1): last error at time 1593276902:
>> ext4_iget:4435: inode 1849777
>> [95714.207432] EXT4-fs (ploop60706p1): error count since last fsck: 42
>> [95714.207447] EXT4-fs (ploop60706p1): initial error at time 1593210489:
>> ext4_ext_find_extent:904: inode 136272
>> [95714.207452] EXT4-fs (ploop60706p1): last error at time 1593231063:
>> ext4_ext_find_extent:904: inode 136272
>>
>> Shutting the containers down and manually mounting and e2fsck'ing their
>> filesystems did clear these errors, but each of the containers (which were
>> mostly used for running Plesk) had widespread issues with corrupt or
>> missing files after the fsck's completed, necessitating their being
>> restored from backup.
>>
>> Concurrently, we also began to see messages like this appearing in
>> /var/log/vzctl.log, which again have never appeared at any point prior to
>> this update being installed:
>>
>> /var/log/vzctl.log:2020-06-26T21:05:19+0100 : Error in fill_hole
>> (check.c:240): Warning: ploop image '/vz/private/8288448/root.hdd/root.hds'
>> is sparse
>> /var/log/vzctl.log:2020-06-26T21:09:41+0100 : Error in fill_hole
>> (check.c:240): Warning: ploop image '/vz/private/8288450/root.hdd/root.hds'
>> is sparse
>> /var/log/vzctl.log:2020-06-26T21:16:22+0100 : Error in fill_hole
>> (check.c:240): Warning: ploop image '/vz/private/8288451/root.hdd/root.hds'
>> is sparse
>> /var/log/vzctl.log:2020-06-26T21:19:57+0100 : Error in fill_hole
>> (check.c:240): Warning: ploop image '/vz/private/8288452/root.hdd/root.hds'
>> is sparse
>>
>> The basic procedure we follow when updating our nodes is as follows:
>>
>> 1, Update the standby node we keep spare for this process
>> 2. vzmigrate all containers from the live node being updated to the
>> standby node
>> 3. Update the live node
>> 4. Reboot the live node
>> 5. vzmigrate the containers from the standby node back to the live node
>> they originally came from
>>
>> So the only tool which has been used to affect these containers is
>> 'vzmigrate' itself, so I'm at something of a loss as to how to explain the
>> root.hdd images for these containers containing sparse gaps.  This is
>> something we have never done, as we have always been aware that OpenVZ does
>> not support their use inside a container's hard drive image.  And the fact
>> that these images 

Re: [Users] Issues after updating to 7.0.14 (136)

2020-07-02 Thread Jehan PROCACCIA
No factory, just repos virtuozzolinux-base and openvz-os

# yum repolist |grep virt 
virtuozzolinux-base VirtuozzoLinux Base 15 415+189 
virtuozzolinux-updates VirtuozzoLinux Updates 0 

Jehan . 


De: "jjs - mainphrame"  
À: "OpenVZ users"  
Cc: "Kevin Drysdale"  
Envoyé: Jeudi 2 Juillet 2020 18:22:33 
Objet: Re: [Users] Issues after updating to 7.0.14 (136) 

Jehan, are you running factory? 

My ovz hosts are up to date, and I see: 

[root@annie ~]# cat /etc/virtuozzo-release 
OpenVZ release 7.0.15 (222) 

Jake 


On Thu, Jul 2, 2020 at 9:08 AM Jehan Procaccia IMT <jehan.procac...@imtbs-tsp.eu> wrote:



"updating to 7.0.14 (136)" !? 

I did an update yesterday , I am far behind that version 

# cat /etc/vzlinux-release 
Virtuozzo Linux release 7.8.0 (609) 

# uname -a 
Linux localhost 3.10.0-1127.8.2.vz7.151.14 #1 SMP Tue Jun 9 12:58:54 MSK 2020 
x86_64 x86_64 x86_64 GNU/Linux 

why don't you try to update to latest version ? 


On 29/06/2020 at 12:30, Kevin Drysdale wrote:

Hello, 

After updating one of our OpenVZ VPS hosting nodes at the end of last week, 
we've started to have issues with corruption apparently occurring inside 
containers. Issues of this nature have never affected the node previously, and 
there do not appear to be any hardware issues that could explain this. 

Specifically, a few hours after updating, we began to see containers 
experiencing errors such as this in the logs: 

[90471.678994] EXT4-fs (ploop35454p1): error count since last fsck: 25 
[90471.679022] EXT4-fs (ploop35454p1): initial error at time 1593205255: 
ext4_ext_find_extent:904: inode 136399 
[90471.679030] EXT4-fs (ploop35454p1): last error at time 1593232922: 
ext4_ext_find_extent:904: inode 136399 
[95189.954569] EXT4-fs (ploop42983p1): error count since last fsck: 67 
[95189.954582] EXT4-fs (ploop42983p1): initial error at time 1593210174: 
htree_dirblock_to_tree:918: inode 926441: block 3683060 
[95189.954589] EXT4-fs (ploop42983p1): last error at time 1593276902: 
ext4_iget:4435: inode 1849777 
[95714.207432] EXT4-fs (ploop60706p1): error count since last fsck: 42 
[95714.207447] EXT4-fs (ploop60706p1): initial error at time 1593210489: 
ext4_ext_find_extent:904: inode 136272 
[95714.207452] EXT4-fs (ploop60706p1): last error at time 1593231063: 
ext4_ext_find_extent:904: inode 136272 

Shutting the containers down and manually mounting and e2fsck'ing their 
filesystems did clear these errors, but each of the containers (which were 
mostly used for running Plesk) had widespread issues with corrupt or missing 
files after the fsck's completed, necessitating their being restored from 
backup. 

Concurrently, we also began to see messages like this appearing in 
/var/log/vzctl.log, which again have never appeared at any point prior to this 
update being installed: 

/var/log/vzctl.log:2020-06-26T21:05:19+0100 : Error in fill_hole (check.c:240): 
Warning: ploop image '/vz/private/8288448/root.hdd/root.hds' is sparse 
/var/log/vzctl.log:2020-06-26T21:09:41+0100 : Error in fill_hole (check.c:240): 
Warning: ploop image '/vz/private/8288450/root.hdd/root.hds' is sparse 
/var/log/vzctl.log:2020-06-26T21:16:22+0100 : Error in fill_hole (check.c:240): 
Warning: ploop image '/vz/private/8288451/root.hdd/root.hds' is sparse 
/var/log/vzctl.log:2020-06-26T21:19:57+0100 : Error in fill_hole (check.c:240): 
Warning: ploop image '/vz/private/8288452/root.hdd/root.hds' is sparse 

The basic procedure we follow when updating our nodes is as follows: 

1. Update the standby node we keep spare for this process
2. vzmigrate all containers from the live node being updated to the standby 
node 
3. Update the live node 
4. Reboot the live node 
5. vzmigrate the containers from the standby node back to the live node they 
originally came from 
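
(Steps 2 and 5 amount to roughly the following sketch; "standby-node" is a placeholder hostname, and real runs add our usual vzmigrate options:)

for CT in $(vzlist -H -o ctid); do    # every running container on this node
    vzmigrate standby-node "$CT"
done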

So the only tool which has been used to affect these containers is 'vzmigrate' 
itself, so I'm at something of a loss as to how to explain the root.hdd images 
for these containers containing sparse gaps. This is something we have never 
done, as we have always been aware that OpenVZ does not support their use 
inside a container's hard drive image. And the fact that these images have 
suddenly become sparse at the same time they have started to exhibit filesystem 
corruption is somewhat concerning. 

We can restore all affected containers from backups, but I wanted to get in 
touch with the list to see if anyone else at any other site has experienced 
these or similar issues after applying the 7.0.14 (136) update. 

Thank you, 
Kevin Drysdale. 





Re: [Users] Issues after updating to 7.0.14 (136)

2020-07-02 Thread Konstantin Bukharov
Hello Kevin,

What was the OpenVZ version *before* the update to 7.0.14-136?

Sparse files for CTs have been around for at least two years.
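
(If it helps, a couple of hedged ways to see what was installed before the update; the package name is just an example:)

rpm -q --last vzkernel | head -5     # vzkernel package installs, newest first
yum history list | head              # recent yum transactions with dates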

Best regards,
Konstantin

-Original Message-
From: users-boun...@openvz.org  On Behalf Of Kevin 
Drysdale
Sent: Monday, June 29, 2020 1:30 PM
To: users@openvz.org
Subject: [Users] Issues after updating to 7.0.14 (136)

Hello,

After updating one of our OpenVZ VPS hosting nodes at the end of last week, 
we've started to have issues with corruption apparently occurring inside 
containers.  Issues of this nature have never affected the node previously, and 
there do not appear to be any hardware issues that could explain this.

Specifically, a few hours after updating, we began to see containers 
experiencing errors such as this in the logs:

[90471.678994] EXT4-fs (ploop35454p1): error count since last fsck: 25
[90471.679022] EXT4-fs (ploop35454p1): initial error at time 1593205255: ext4_ext_find_extent:904: inode 136399
[90471.679030] EXT4-fs (ploop35454p1): last error at time 1593232922: ext4_ext_find_extent:904: inode 136399
[95189.954569] EXT4-fs (ploop42983p1): error count since last fsck: 67
[95189.954582] EXT4-fs (ploop42983p1): initial error at time 1593210174: htree_dirblock_to_tree:918: inode 926441: block 3683060
[95189.954589] EXT4-fs (ploop42983p1): last error at time 1593276902: ext4_iget:4435: inode 1849777
[95714.207432] EXT4-fs (ploop60706p1): error count since last fsck: 42
[95714.207447] EXT4-fs (ploop60706p1): initial error at time 1593210489: ext4_ext_find_extent:904: inode 136272
[95714.207452] EXT4-fs (ploop60706p1): last error at time 1593231063: ext4_ext_find_extent:904: inode 136272

Shutting the containers down and manually mounting and e2fsck'ing their 
filesystems did clear these errors, but each of the containers (which were 
mostly used for running Plesk) had widespread issues with corrupt or missing 
files after the fsck's completed, necessitating their being restored from 
backup.

Concurrently, we also began to see messages like this appearing in 
/var/log/vzctl.log, which again have never appeared at any point prior to this 
update being installed:

/var/log/vzctl.log:2020-06-26T21:05:19+0100 : Error in fill_hole (check.c:240): 
Warning: ploop image '/vz/private/8288448/root.hdd/root.hds' is sparse
/var/log/vzctl.log:2020-06-26T21:09:41+0100 : Error in fill_hole (check.c:240): 
Warning: ploop image '/vz/private/8288450/root.hdd/root.hds' is sparse
/var/log/vzctl.log:2020-06-26T21:16:22+0100 : Error in fill_hole (check.c:240): 
Warning: ploop image '/vz/private/8288451/root.hdd/root.hds' is sparse
/var/log/vzctl.log:2020-06-26T21:19:57+0100 : Error in fill_hole (check.c:240): 
Warning: ploop image '/vz/private/8288452/root.hdd/root.hds' is sparse
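
(A quick way to confirm whether an image is actually sparse is to compare its apparent size with the blocks really allocated; the path is one of the examples above:)

stat -c 'apparent=%s bytes  allocated=%b blocks of %B bytes' /vz/private/8288448/root.hdd/root.hds
# or compare: du -h --apparent-size root.hds  vs  du -h root.hds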

The basic procedure we follow when updating our nodes is as follows:

1. Update the standby node we keep spare for this process
2. vzmigrate all containers from the live node being updated to the standby node
3. Update the live node
4. Reboot the live node
5. vzmigrate the containers from the standby node back to the live node they originally came from

So the only tool which has been used to affect these containers is 'vzmigrate' 
itself, so I'm at something of a loss as to how to explain the root.hdd images 
for these containers containing sparse gaps.  This is something we have never 
done, as we have always been aware that OpenVZ does not support their use 
inside a container's hard drive image.  And the fact that these images have 
suddenly become sparse at the same time they have started to exhibit filesystem 
corruption is somewhat concerning.

We can restore all affected containers from backups, but I wanted to get in 
touch with the list to see if anyone else at any other site has experienced 
these or similar issues after applying the 7.0.14 (136) update.

Thank you,
Kevin Drysdale.






Re: [Users] Issues after updating to 7.0.14 (136)

2020-07-02 Thread jjs - mainphrame
Jehan, are you running factory?

My ovz hosts are up to date, and I see:

[root@annie ~]# cat /etc/virtuozzo-release
OpenVZ release 7.0.15 (222)

Jake


On Thu, Jul 2, 2020 at 9:08 AM Jehan Procaccia IMT <
jehan.procac...@imtbs-tsp.eu> wrote:

> "updating to 7.0.14 (136)" !?
>
> I did an update yesterday , I am far behind that version
>
> *# cat /etc/vzlinux-release*
> *Virtuozzo Linux release 7.8.0 (609)*
>
> *# uname -a *
> *Linux localhost 3.10.0-1127.8.2.vz7.151.14 #1 SMP Tue Jun 9 12:58:54 MSK
> 2020 x86_64 x86_64 x86_64 GNU/Linux*
>
> why don't you try to update to latest version ?
>
>
> On 29/06/2020 at 12:30, Kevin Drysdale wrote:
>
> Hello,
>
> After updating one of our OpenVZ VPS hosting nodes at the end of last
> week, we've started to have issues with corruption apparently occurring
> inside containers.  Issues of this nature have never affected the node
> previously, and there do not appear to be any hardware issues that could
> explain this.
>
> Specifically, a few hours after updating, we began to see containers
> experiencing errors such as this in the logs:
>
> [90471.678994] EXT4-fs (ploop35454p1): error count since last fsck: 25
> [90471.679022] EXT4-fs (ploop35454p1): initial error at time 1593205255:
> ext4_ext_find_extent:904: inode 136399
> [90471.679030] EXT4-fs (ploop35454p1): last error at time 1593232922:
> ext4_ext_find_extent:904: inode 136399
> [95189.954569] EXT4-fs (ploop42983p1): error count since last fsck: 67
> [95189.954582] EXT4-fs (ploop42983p1): initial error at time 1593210174:
> htree_dirblock_to_tree:918: inode 926441: block 3683060
> [95189.954589] EXT4-fs (ploop42983p1): last error at time 1593276902:
> ext4_iget:4435: inode 1849777
> [95714.207432] EXT4-fs (ploop60706p1): error count since last fsck: 42
> [95714.207447] EXT4-fs (ploop60706p1): initial error at time 1593210489:
> ext4_ext_find_extent:904: inode 136272
> [95714.207452] EXT4-fs (ploop60706p1): last error at time 1593231063:
> ext4_ext_find_extent:904: inode 136272
>
> Shutting the containers down and manually mounting and e2fsck'ing their
> filesystems did clear these errors, but each of the containers (which were
> mostly used for running Plesk) had widespread issues with corrupt or
> missing files after the fsck's completed, necessitating their being
> restored from backup.
>
> Concurrently, we also began to see messages like this appearing in
> /var/log/vzctl.log, which again have never appeared at any point prior to
> this update being installed:
>
> /var/log/vzctl.log:2020-06-26T21:05:19+0100 : Error in fill_hole
> (check.c:240): Warning: ploop image '/vz/private/8288448/root.hdd/root.hds'
> is sparse
> /var/log/vzctl.log:2020-06-26T21:09:41+0100 : Error in fill_hole
> (check.c:240): Warning: ploop image '/vz/private/8288450/root.hdd/root.hds'
> is sparse
> /var/log/vzctl.log:2020-06-26T21:16:22+0100 : Error in fill_hole
> (check.c:240): Warning: ploop image '/vz/private/8288451/root.hdd/root.hds'
> is sparse
> /var/log/vzctl.log:2020-06-26T21:19:57+0100 : Error in fill_hole
> (check.c:240): Warning: ploop image '/vz/private/8288452/root.hdd/root.hds'
> is sparse
>
> The basic procedure we follow when updating our nodes is as follows:
>
> 1, Update the standby node we keep spare for this process
> 2. vzmigrate all containers from the live node being updated to the
> standby node
> 3. Update the live node
> 4. Reboot the live node
> 5. vzmigrate the containers from the standby node back to the live node
> they originally came from
>
> So the only tool which has been used to affect these containers is
> 'vzmigrate' itself, so I'm at something of a loss as to how to explain the
> root.hdd images for these containers containing sparse gaps.  This is
> something we have never done, as we have always been aware that OpenVZ does
> not support their use inside a container's hard drive image.  And the fact
> that these images have suddenly become sparse at the same time they have
> started to exhibit filesystem corruption is somewhat concerning.
>
> We can restore all affected containers from backups, but I wanted to get
> in touch with the list to see if anyone else at any other site has
> experienced these or similar issues after applying the 7.0.14 (136) update.
>
> Thank you,
> Kevin Drysdale.


Re: [Users] Issues after updating to 7.0.14 (136)

2020-07-02 Thread Jehan Procaccia IMT

"updating to 7.0.14 (136)" !?

I did an update yesterday; I am far behind that version

# cat /etc/vzlinux-release
Virtuozzo Linux release 7.8.0 (609)

# uname -a
Linux localhost 3.10.0-1127.8.2.vz7.151.14 #1 SMP Tue Jun 9 12:58:54 MSK 2020 x86_64 x86_64 x86_64 GNU/Linux
Why don't you try to update to the latest version?


On 29/06/2020 at 12:30, Kevin Drysdale wrote:

Hello,

After updating one of our OpenVZ VPS hosting nodes at the end of last 
week, we've started to have issues with corruption apparently 
occurring inside containers.  Issues of this nature have never 
affected the node previously, and there do not appear to be any 
hardware issues that could explain this.


Specifically, a few hours after updating, we began to see containers 
experiencing errors such as this in the logs:


[90471.678994] EXT4-fs (ploop35454p1): error count since last fsck: 25
[90471.679022] EXT4-fs (ploop35454p1): initial error at time 
1593205255: ext4_ext_find_extent:904: inode 136399
[90471.679030] EXT4-fs (ploop35454p1): last error at time 1593232922: 
ext4_ext_find_extent:904: inode 136399

[95189.954569] EXT4-fs (ploop42983p1): error count since last fsck: 67
[95189.954582] EXT4-fs (ploop42983p1): initial error at time 
1593210174: htree_dirblock_to_tree:918: inode 926441: block 3683060
[95189.954589] EXT4-fs (ploop42983p1): last error at time 1593276902: 
ext4_iget:4435: inode 1849777

[95714.207432] EXT4-fs (ploop60706p1): error count since last fsck: 42
[95714.207447] EXT4-fs (ploop60706p1): initial error at time 
1593210489: ext4_ext_find_extent:904: inode 136272
[95714.207452] EXT4-fs (ploop60706p1): last error at time 1593231063: 
ext4_ext_find_extent:904: inode 136272


Shutting the containers down and manually mounting and e2fsck'ing 
their filesystems did clear these errors, but each of the containers 
(which were mostly used for running Plesk) had widespread issues with 
corrupt or missing files after the fsck's completed, necessitating 
their being restored from backup.


Concurrently, we also began to see messages like this appearing in 
/var/log/vzctl.log, which again have never appeared at any point prior 
to this update being installed:


/var/log/vzctl.log:2020-06-26T21:05:19+0100 : Error in fill_hole 
(check.c:240): Warning: ploop image 
'/vz/private/8288448/root.hdd/root.hds' is sparse
/var/log/vzctl.log:2020-06-26T21:09:41+0100 : Error in fill_hole 
(check.c:240): Warning: ploop image 
'/vz/private/8288450/root.hdd/root.hds' is sparse
/var/log/vzctl.log:2020-06-26T21:16:22+0100 : Error in fill_hole 
(check.c:240): Warning: ploop image 
'/vz/private/8288451/root.hdd/root.hds' is sparse
/var/log/vzctl.log:2020-06-26T21:19:57+0100 : Error in fill_hole 
(check.c:240): Warning: ploop image 
'/vz/private/8288452/root.hdd/root.hds' is sparse


The basic procedure we follow when updating our nodes is as follows:

1. Update the standby node we keep spare for this process
2. vzmigrate all containers from the live node being updated to the 
standby node

3. Update the live node
4. Reboot the live node
5. vzmigrate the containers from the standby node back to the live 
node they originally came from


So the only tool which has been used to affect these containers is 
'vzmigrate' itself, so I'm at something of a loss as to how to explain 
the root.hdd images for these containers containing sparse gaps.  This 
is something we have never done, as we have always been aware that 
OpenVZ does not support their use inside a container's hard drive 
image.  And the fact that these images have suddenly become sparse at 
the same time they have started to exhibit filesystem corruption is 
somewhat concerning.


We can restore all affected containers from backups, but I wanted to 
get in touch with the list to see if anyone else at any other site has 
experienced these or similar issues after applying the 7.0.14 (136) 
update.


Thank you,
Kevin Drysdale.



