Re: [Users] virtuozzo base OS and new centos 8 orientations

2021-03-29 Thread Kevin Drysdale
Hello,

Yes, it was just the stock ulimit default value of 1024 for open files before I 
changed it.  I increased that by a factor of 10 to 10240, and that did the 
trick.  I doubt I needed to bump it up by anywhere near as much, but that 
certainly enabled the conversion to run through OK.
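In concrete terms, the change was along these lines (the 10240 value is simply what happened to work here, not a tuned recommendation):

```shell
# Show the current soft and hard limits for open file descriptors
ulimit -Sn
ulimit -Hn

# Raise the soft limit for the current shell before running the conversion.
# This only applies to this shell session and cannot exceed the hard limit.
ulimit -n 10240
```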

--

Kevin Drysdale
Senior Systems Administrator

iomart

Tel: 0141 931 6400
Fax: 0141 931 6401

https://twitter.com/iomart
https://www.linkedin.com/company/iomart

www.iomart.com
iomart - any cloud.  your way.

**
The information transmitted is intended only for the person or entity to
which it is addressed and may contain confidential material.  The company
cannot accept liability for any errors or omissions in the content of this
message, which may arise as a result of email transmission.  Statements
and opinions expressed in this email may not represent those of the
company.  It is expressly declared that this email does not constitute or
form part of a contract or unilateral obligation.  Please note that the
company monitors, analyses and archives email traffic, data and the
content of email for the purposes of security, legal compliance and staff
training.  If you have received this message in error, please notify the
company and remove it from your system.

iomart Group PLC is a Public Limited Company incorporated in Scotland with
registration number SC204560 and whose registered office is at Lister
Pavilion, Kelvin Campus, West of Scotland Science Park, Glasgow G20 0SP
**

> On 24 Mar 2021, at 15:16, Denis Silakov  wrote:
> 
> 
> Hi,
> 
> good to know the tool worked for you.
> 
> As for the ulimit limits - did you use the OpenVZ default ones when Error 24 
> occurred? And what values did you set that solved the problem?
> From: users-boun...@openvz.org  on behalf of Kevin 
> Drysdale 
> Sent: Monday, March 22, 2021 5:06 PM
> To: OpenVZ users 
> Subject: Re: [Users] virtuozzo base OS and new centos 8 orientations
>  
> Hello,
> 
> Thanks very much for this new utility, that's an impressive and handy thing 
> to have for sure.  I've been able to use it to convert a few test CentOS 8 
> containers seemingly without any errors or problems.  I had hoped I might be 
> able to test converting a container with Plesk installed, but I see you've 
> intentionally blocked that one - probably just as well :)
> 
> One hopefully-handy hint in case it causes issues for anyone else: initially 
> I received errors on my first conversion attempt indicating that there were 
> too many open files during the conversion process, and it exited with Error 
> 24.  I had to bump up the soft and hard limit for open files via 'ulimit', 
> but once I did that the conversion worked fine next time I tried it.
> 
> --
> 
> Kevin Drysdale
> Senior Systems Administrator
> 
> iomart
> 
> 
>> On 22 Mar 2021, at 07:12, Denis Silakov  wrote:
>> 
>> Looks a bit strange indeed; vzdeploy was intended to just leave httpd from 
>> centos in such cases.
>> 
>> But now we have one more way of converting containers, which will probably 
>> be more reliable. It utilizes the vzpkg tool and should be launched from the 
>> server side:
>> 
>> # yum install vzdeploy8
>> # vzconvert8 convert 

Re: [Users] virtuozzo base OS and new centos 8 orientations

2021-03-22 Thread Kevin Drysdale
Hello,

Thanks very much for this new utility, that's an impressive and handy thing to 
have for sure.  I've been able to use it to convert a few test CentOS 8 
containers seemingly without any errors or problems.  I had hoped I might be 
able to test converting a container with Plesk installed, but I see you've 
intentionally blocked that one - probably just as well :)

One hopefully-handy hint in case it causes issues for anyone else: initially I 
received errors on my first conversion attempt indicating that there were too 
many open files during the conversion process, and it exited with Error 24.  I 
had to bump up the soft and hard limit for open files via 'ulimit', but once I 
did that the conversion worked fine next time I tried it.

--

Kevin Drysdale
Senior Systems Administrator

iomart


> On 22 Mar 2021, at 07:12, Denis Silakov  wrote:
> 
> 
> Looks a bit strange indeed; vzdeploy was intended to just leave httpd from 
> centos in such cases.
> 
> But now we have one more way of converting containers, which will probably be 
> more reliable. It utilizes the vzpkg tool and should be launched from the server 
> side:
> 
> # yum install vzdeploy8
> # vzconvert8 convert 
> From: users-boun...@openvz.org  on behalf of Ian 
> 
> Sent: Thursday, March 11, 2021 6:48 PM
> To: OpenVZ users 
> Subject: Re: [Users] virtuozzo base OS and new centos 8 orientations
>  
> On 03/03/2021 10:46, Denis Silakov wrote:
> > Meanwhile, issue with kernel-headers should be fixed in the latest script.
> > 
> 
> Hi,
> 
> On a brand new centos8 container I get the following error from yum 
> after running the vzdeploy8 script (with the ignore-kernel env set):
> 
> -
> 
> Modular dependency problems:
> 
>   Problem 1: conflicting requests
>   - nothing provides module(platform:el8) needed by module httpd:2.4:8030020201104025655:30b713e6-0.x86_64
>   Problem 2: conflicting requests
>   - nothing provides module(platform:el8) needed by module python36:3.6:8030020201104034153:24f1489c-0.x86_64
> Dependencies resolved.
> Nothing to do.
> 
> --
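For what it's worth, a generic way to clear stale module metadata after a distro conversion is to reset and re-enable the affected module streams. This is a hedged sketch, not a confirmed fix for the error above; the stream names are taken from the error output itself:

```shell
# Reset the module streams that carry the stale platform:el8 dependency
dnf module reset -y httpd python36

# Re-enable the same streams against the converted distro's metadata
dnf module enable -y httpd:2.4 python36:3.6

# Re-sync installed packages with whatever the new repos provide
dnf distro-sync -y
```

Whether this resolves the "nothing provides module(platform:el8)" conflict depends on the converted distro shipping its own platform pseudo-module.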
> 
> I tried removing and reinstalling httpd* but that didn't resolve the error.
> 
> 
> Any ideas ?
> 
> 
> I feel this is very close now !
> Thanks for all your hard work.
> 
> Regards
> 
> Ian
> 
> 
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
> 




___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] Question about Plesk support in VzLinux 8

2021-02-11 Thread Kevin Drysdale

Hello,

As per the subject, really: are there any plans that you are aware of for 
Plesk (which I realise is now a separate company from yourselves these 
days) to support VzLinux 8?  I can see it has support for VzLinux 7, but 
attempts to install it on 8 fail, and the Plesk support matrix does not 
list it as a supported distro at this time.


A great many of our existing customers run Plesk in their containers, so 
sadly unless/until VzLinux 8 has Plesk support (or Plesk has VzLinux 8 
support, whichever way around it goes), we would not really be able to use 
it as a migration target for our CentOS 8 customers.


--

Kevin Drysdale
Senior Systems Administrator

iomart






___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] virtuozzo base OS and new centos 8 orientations

2021-01-11 Thread Kevin Drysdale
Hello,

Thank you, that's great.  I just thought I'd let you know I'm still not seeing 
any PHP packages in the repos for VzLinux 8 as of this morning, just in case 
something still needs to be done for these to show up.  If it's just a case of 
needing to wait a bit longer then I'll try again tomorrow.

--

Kevin Drysdale
Senior Systems Administrator

iomart







___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] virtuozzo base OS and new centos 8 orientations

2021-01-07 Thread Kevin Drysdale
Hello,

Thank you for this, that's very reassuring news regarding OpenVZ/Virtuozzo 
itself.  I'm carrying out some testing of possible container operating system 
replacements for CentOS 8, and as per your recommendation I'm giving VzLinux 8 
a go.  However, one issue I have encountered is a lack of any PHP 
packages in the virtuozzolinux-base repo for VzLinux 8.

I can of course obtain PHP packages in a variety of other ways from other 
sources, but before I go down that road I just wanted to check to make sure I'm 
not missing something obvious, and that it really is the case that VzLinux 8 
ships with no PHP.  We use OpenVZ for selling customer containers that are 
largely used for Web hosting purposes, so being able to provide the full LAMP 
stack is pretty much essential.
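One way to double-check what a single repo actually ships is to query it in isolation; a sketch, with the repo id taken from the virtuozzolinux-base name mentioned above:

```shell
# List any php packages available from the virtuozzolinux-base repo only,
# with all other repos disabled so nothing else can satisfy the query
yum --disablerepo='*' --enablerepo=virtuozzolinux-base list available 'php*'
```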

--

Kevin Drysdale
Senior Systems Administrator

iomart


> On 29 Dec 2020, at 19:07, Denis Silakov  wrote:
> 
> 
> Hi,
> 
> Yes, we already have a VzLinux 8 template and are going to continue maintaining 
> it as a RHEL recompilation. The official roadmap will likely be announced in the 
> near future, since it is indeed logical to suggest VzLinux as a CentOS 
> replacement for CTs.
> 
> As for Suse - we had templates for openSUSE 42.x, but it seems there was not 
> much interest in them, so currently there are no plans for a new openSUSE 
> template. SLES templates should be in good shape, but they indeed require a 
> valid SLES license. 
> https://docs.virtuozzo.com/virtuozzo_hybrid_server_7_users_guide/managing-virtual-machines-and-containers/creating-virtual-machines-and-containers.html
>  describes how to set up the SLES 15 template.
> 
> From: users-boun...@openvz.org  on behalf of jehan 
> Procaccia tem-tsp 
> Sent: Monday, December 28, 2020 8:18 PM
> To: OpenVZ users 
> Subject: Re: [Users] virtuozzo base OS and new centos 8 orientations
>  
> I did not get any reply to my question regarding a CT template for a centos 8 
> replacement in 2021 as an rpm-based, up-to-date distrib with LTS support. 
> I realize that I missed 2 other distribs in my previous post, apart from the 
> centos/debian/fedora/ubuntu pre-packaged templates:
> https://download.openvz.org/virtuozzo/releases/openvz-7.0.15-628/x86_64/os/Packages/s/
> I can see Suse and SLES at that URL; I guess the first one is openSUSE and the 
> latter is SLES (with a licence needed?). 
> But from this URL: 
> https://www.whatuptime.com/downloads/openvz-virtuozzo-7-templates/
> there is no SLES nor (open)SUSE ... will these distribs continue to be 
> available as vzlinux CT templates? 
> 
> I also realized from that latter URL that VzLinux 7 itself is available as a 
> CT template; if VzLinux continues to be a RHEL recompilation, then VzLinux 8 
> (roadmap?) would be a good alternative to the centos 8 CT templates!?
> 
> Thanks . 
> 
>> On 13/12/2020 at 18:36, jehan Procaccia tem-tsp wrote:
>> thanks, that's a relief; I thought you built Virtuozzo from CentOS, so if 
>> it is built directly from RHEL sources we are safe.
>> 
>> regarding CT templates, from download site: 
>> https://download.openvz.org/virtuozzo/releases/openvz-7.0.15-628/x86_64/os/Packages/
>> we can find centos/

Re: [Users] Issues after updating to 7.0.14 (136)

2020-07-07 Thread Kevin Drysdale

Hello,

Thanks to all who have replied to this thread so far - my apologies for 
taking so long to get back to you all.


In terms of where I'm seeing the EXT4 errors, they are showing up in the 
kernel log on the node itself, so the output of 'dmesg' is regularly 
seeing entries such as these:


[375095.199203] EXT4-fs (ploop43209p1): Remounting filesystem read-only
[375095.199267] EXT4-fs error (device ploop43209p1) in 
ext4_ext_remove_space:3073: IO failure
[375095.199400] EXT4-fs error (device ploop43209p1) in ext4_ext_truncate:4692: 
IO failure
[375095.199517] EXT4-fs error (device ploop43209p1) in 
ext4_reserve_inode_write:5358: Journal has aborted
[375095.199637] EXT4-fs error (device ploop43209p1) in ext4_truncate:4145: 
Journal has aborted
[375095.199779] EXT4-fs error (device ploop43209p1) in 
ext4_reserve_inode_write:5358: Journal has aborted
[375095.199957] EXT4-fs error (device ploop43209p1) in ext4_orphan_del:2731: 
Journal has aborted
[375095.200138] EXT4-fs error (device ploop43209p1) in 
ext4_reserve_inode_write:5358: Journal has aborted
[461642.709690] EXT4-fs (ploop43209p1): error count since last fsck: 8
[461642.709702] EXT4-fs (ploop43209p1): initial error at time 1593576601: 
ext4_ext_remove_space:3000: inode 136354
[461642.709708] EXT4-fs (ploop43209p1): last error at time 1593576601: 
ext4_reserve_inode_write:5358: inode 136354

Inside the container itself, not much is being logged, since the affected 
container in this particular instance is indeed (as per the errors 
above) mounted read-only due to the errors its root.hdd filesystem is 
experiencing.


Having dug a bit more into what happened here, I suspect that this 
corruption may have come about when the containers were being moved either 
to or from the standby node and the live node, but I can't be 100% sure of 
that.


The picture is further muddied in that the standby node (the node that we 
used for evacuating containers from the node to be updated) was itself 
initially updated to 7.0.14 (135).  However, the live node (which was 
updated a short time after the standby node) appears to have got 7.0.14 
(136).  So I don't know if the issue was in fact with 7.0.14 (135) (which 
was on the standby node, where the containers would have been moved to, 
and moved back from), or with 7.0.14 (136) on the live node.  Were there any 
known issues with 7.0.14 (135) that might correlate with what I'm seeing 
above?


Anyway, once again, thanks to everyone who has replied so far.  If anyone 
has any further questions or would like any further information, please 
let me know and I will be happy to assist.


Thank you,
Kevin Drysdale.


On Thu, 2 Jul 2020, Jehan PROCACCIA wrote:


yes , you are right, I do get the same virtuozzo-release  as mentioned in the 
initial subject, sorry for the noise .

# cat /etc/virtuozzo-release
OpenVZ release 7.0.14 (136)

but anyway, I don't see any ploop / fsck errors in the host /var/log/vzctl.log
or inside the CT.  Where did you see those errors?

Jehan .

_
De: "jjs - mainphrame" 
À: "OpenVZ users" 
Envoyé: Jeudi 2 Juillet 2020 19:33:23
Objet: Re: [Users] Issues after updating to 7.0.14 (136)

Thanks for that sanity check, the conundrum is resolved. vzlinux-release and 
virtuozzo-release are indeed different things.
Jake

On Thu, Jul 2, 2020 at 10:27 AM Jonathan Wright  wrote:

  /etc/redhat-release and /etc/virtuozzo-release are two different things.

  On 7/2/20 12:16 PM, jjs - mainphrame wrote:
  Jehan - 

  I get the same output here -

  [root@annie ~]# yum repolist  |grep virt
  virtuozzolinux-base    VirtuozzoLinux Base    15,415+189
  virtuozzolinux-updates VirtuozzoLinux Updates          0

  I'm baffled as to how you're on 7.8.0 while I'm at 7.0.15 even though I'm 
fully up to date.

  # uname -a
  Linux annie.ufcfan.org 3.10.0-1127.8.2.vz7.151.10 #1 SMP Mon Jun 1 
19:05:52 MSK 2020 x86_64 x86_64 x86_64 GNU/Linux

Jake

On Thu, Jul 2, 2020 at 10:08 AM Jehan PROCACCIA  
wrote:
  no factory, just the virtuozzolinux-base and openvz-os repos

# yum repolist  |grep virt
virtuozzolinux-base    VirtuozzoLinux Base    15 415+189
virtuozzolinux-updates VirtuozzoLinux Updates  0

Jehan .

_
De: "jjs - mainphrame" 
À: "OpenVZ users" 
Cc: "Kevin Drysdale" 
Envoyé: Jeudi 2 Juillet 2020 18:22:33
Objet: Re: [Users] Issues after updating to 7.0.14 (136)

Jehan, are you running factory?

My ovz hosts are up to date, and I see:

[root@annie ~]# cat /etc/virtuozzo-release
OpenVZ release 7.0.15 (222)

Jake


On Thu, 

[Users] Issues after updating to 7.0.14 (136)

2020-06-29 Thread Kevin Drysdale

Hello,

After updating one of our OpenVZ VPS hosting nodes at the end of last 
week, we've started to have issues with corruption apparently occurring 
inside containers.  Issues of this nature have never affected the node 
previously, and there do not appear to be any hardware issues that could 
explain this.


Specifically, a few hours after updating, we began to see containers 
experiencing errors such as this in the logs:


[90471.678994] EXT4-fs (ploop35454p1): error count since last fsck: 25
[90471.679022] EXT4-fs (ploop35454p1): initial error at time 1593205255: 
ext4_ext_find_extent:904: inode 136399
[90471.679030] EXT4-fs (ploop35454p1): last error at time 1593232922: 
ext4_ext_find_extent:904: inode 136399
[95189.954569] EXT4-fs (ploop42983p1): error count since last fsck: 67
[95189.954582] EXT4-fs (ploop42983p1): initial error at time 1593210174: 
htree_dirblock_to_tree:918: inode 926441: block 3683060
[95189.954589] EXT4-fs (ploop42983p1): last error at time 1593276902: 
ext4_iget:4435: inode 1849777
[95714.207432] EXT4-fs (ploop60706p1): error count since last fsck: 42
[95714.207447] EXT4-fs (ploop60706p1): initial error at time 1593210489: 
ext4_ext_find_extent:904: inode 136272
[95714.207452] EXT4-fs (ploop60706p1): last error at time 1593231063: 
ext4_ext_find_extent:904: inode 136272

Shutting the containers down and manually mounting and e2fsck'ing their 
filesystems did clear these errors, but each of the containers (which were 
mostly used for running Plesk) had widespread issues with corrupt or 
missing files after the fsck's completed, necessitating their being 
restored from backup.
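For reference, the manual mount-and-fsck pass described above looks roughly like this on an OpenVZ 7 node. The container ID and ploop device name are illustrative only (ploop prints the device it actually attaches; the partition name below is taken from the dmesg output above):

```shell
CTID=123456   # illustrative container ID

# Stop the container so nothing holds the image open
vzctl stop "$CTID"

# Attach the ploop image as a block device without mounting the filesystem
ploop mount /vz/private/"$CTID"/root.hdd/DiskDescriptor.xml

# Force a full check of the first partition on the device ploop reported
e2fsck -f /dev/ploop35454p1

# Detach the image and bring the container back up
ploop umount /vz/private/"$CTID"/root.hdd/DiskDescriptor.xml
vzctl start "$CTID"
```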


Concurrently, we also began to see messages like this appearing in 
/var/log/vzctl.log, which again have never appeared at any point prior to 
this update being installed:


/var/log/vzctl.log:2020-06-26T21:05:19+0100 : Error in fill_hole (check.c:240): 
Warning: ploop image '/vz/private/8288448/root.hdd/root.hds' is sparse
/var/log/vzctl.log:2020-06-26T21:09:41+0100 : Error in fill_hole (check.c:240): 
Warning: ploop image '/vz/private/8288450/root.hdd/root.hds' is sparse
/var/log/vzctl.log:2020-06-26T21:16:22+0100 : Error in fill_hole (check.c:240): 
Warning: ploop image '/vz/private/8288451/root.hdd/root.hds' is sparse
/var/log/vzctl.log:2020-06-26T21:19:57+0100 : Error in fill_hole (check.c:240): 
Warning: ploop image '/vz/private/8288452/root.hdd/root.hds' is sparse
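Whether an image file really has become sparse can be confirmed independently of ploop by comparing its allocated bytes with its apparent size. A generic sketch (in practice the argument would be one of the root.hds paths from the log above):

```shell
# Succeeds if the file occupies fewer bytes on disk than its apparent size,
# i.e. it contains holes. %b = allocated blocks, %B = bytes per block,
# %s = apparent size in bytes (GNU stat).
is_sparse() {
    alloc=$(( $(stat -c '%b' "$1") * $(stat -c '%B' "$1") ))
    size=$(stat -c '%s' "$1")
    [ "$alloc" -lt "$size" ]
}

# Demonstration on a throwaway file containing a 1 MiB hole
truncate -s 1M /tmp/hole.img
is_sparse /tmp/hole.img && echo "/tmp/hole.img is sparse"
rm -f /tmp/hole.img
```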

The basic procedure we follow when updating our nodes is as follows:

1. Update the standby node we keep spare for this process
2. vzmigrate all containers from the live node being updated to the 
standby node
3. Update the live node
4. Reboot the live node
5. vzmigrate the containers from the standby node back to the live node 
they originally came from
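The evacuation steps (2 and 5) amount to something like the following. The standby hostname is a placeholder, and the exact vzmigrate invocation (e.g. online-migration flags) varies between OpenVZ releases, so treat this as a sketch:

```shell
STANDBY=standby.example.com   # placeholder hostname

# Migrate every container present on this node to the standby node
for ctid in $(vzlist -H -o ctid); do
    vzmigrate "$STANDBY" "$ctid"
done
```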


The only tool which has been used to affect these containers is 
'vzmigrate' itself, so I'm at something of a loss as to how to explain 
the root.hdd images for these containers containing sparse gaps.  This is 
something we have never done, as we have always been aware that OpenVZ 
does not support their use inside a container's hard drive image.  And the 
fact that these images have suddenly become sparse at the same time they 
have started to exhibit filesystem corruption is somewhat concerning.


We can restore all affected containers from backups, but I wanted to get 
in touch with the list to see if anyone else at any other site has 
experienced these or similar issues after applying the 7.0.14 (136) 
update.


Thank you,
Kevin Drysdale.

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users

