Re: [Users] Status of OVZ 8 & 9

2024-04-12 Thread jjs - mainphrame
I've been asking the same question.

As much as I like openvz (I've been using it since 2010), my last openvz
server now runs in a VM under proxmox.

In my experience openvz containers are more reliable than proxmox, but
proxmox does have a very nice web interface.

I hope the openvz project is revived before it's too late.

J

On Fri, Apr 12, 2024 at 8:35 AM jehan Procaccia <
jehan.procac...@imtbs-tsp.eu> wrote:

> Hi
>
> a year later ... I'm giving OVZ 9 a try from the latest ISO I could find
> in the factory9 repo (is that the right place?):
>
>
> https://download.openvz.org/virtuozzo/factory9/x86_64/iso/openvz-iso-9.0.1-667.iso
>
>
> but it still dates from September 2023 ...
> *25-Sep-2023 20:02*  *2.8G*
>
> hopefully we'll get at least a Beta available before RHEL7 goes EOL in
> June 2024.
>
> the ISO installs well; the *prlctl* package is still not there by default,
> I had to install it manually (!?)
>
> it is still unclear why we get a *deprecated* message when using the
> *prlctl* (vz7) command:
>
> *WARNING: You are using a deprecated CLI component that won't be installed
> by default in the next major release. Please use virsh instead*
>
> will *virsh* replace *prlctl* in VZ9? I understand it for VMs, but for
> CTs!?
>
> still, the *prlctl* package (*prlctl-9.0.2-1.vz9.x86_64*) now installs
> correctly (no more rpm gpg signature failure), but fails to run:
>
>
> *# prlctl list*
>
> *prlctl: symbol lookup error: prlctl: undefined symbol:
> PrlVmCfg_SetNetfilterMode*
>
>
> It would be very helpful for academics like us to get an up-to-date, even
> Alpha, release of OpenVZ 9, if you want the community to stay with
> OpenVZ/Virtuozzo.
>
> I know dozens of sysadmins around me who left VMware for Proxmox ... We
> have a short window to get them to give OpenVZ a try, but as 7 will reach
> end of maintenance very soon and VZ9 is not testable, that's not very handy.
>
> Let us know what the roadmap is regarding OVZ9.
>
> Thanks.
>
> jehan
>
>
>
> On 13/02/2023 09:09, jehan Procaccia wrote:
>
> good, let us know.
>
> I did open a bug report regarding this issue:
>
> https://bugs.openvz.org/browse/OVZ-7419
>
> it was marked as resolved last week, but I still fail to install prlctl
> (I just did a dnf clean all), so I reopened the issue.
>
> maybe the fix is in that new iso? Or should I uninstall/reinstall the
> openvz-release-9.0.1-383.vz9.x86_64 package? I didn't try that because it
> also requires removing 75 packages (qemu* ...) as dependencies.
>
> Jehan
>
> PS: anyway, if prlctl finally gets installed, is this the way to go? The
> "deprecated" message is not reassuring.
> On 13/02/2023 06:11, jjs - mainphrame wrote:
>
> I see there's a new pre-release iso, downloading it now -
>
>
> https://download.openvz.org/virtuozzo/factory9/x86_64/iso/openvz-iso-9.0.1-412.iso
>
> Jake
>
> On Thu, Dec 8, 2022 at 4:16 PM jjs - mainphrame 
> wrote:
>
>> I've been running openvz 7 for some years, and I periodically check on
>> the status of openvz 8 and 9.
>>
>> While openvz 7 has been getting updates, it seems openvz 8 is fairly
>> static, and openvz 9 seems not ready for use.
>>
>> Is there an intent to continue support of openvz beyond version 7?
>>
>> Since openvz is a great advertisement for virtuozzo, it would be a shame
>> if it faded away.
>>
>> Jake
>>
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] Future of openvz

2023-08-28 Thread jjs - mainphrame
Hello all,

Where can I go to find the latest status or roadmap on openvz 9?

This list seems quiet as a graveyard lately, and I haven't seen any sort of
usable openvz 9 release yet. On the day a functional openvz 9 beta is
released, I will jump for joy. I have some hardware set aside for that
eventuality.

Until then, my last remaining OVZ 7 machine lives on as a VM in proxmox 8.

Jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] New OVZ9 iso pre-release

2023-07-11 Thread jjs - mainphrame
I've tried each new openvz 9 pre-release and found it to be broken in the
same ways.

For now, I'm going to put openvz 9 testing on the back burner. I'm still
stubbornly running core services on my openvz 7 machines, as their
reliability is well proven, but as for the openvz 9 test hardware, I've
wiped it, and installed proxmox 8.

I'll continue to monitor the list, and hopefully at some point there will
be a release candidate for openvz 9.

Jake

On Sat, May 27, 2023 at 1:32 PM jjs - mainphrame  wrote:

> It was worth a shot.
>
> Still quite problematic, not usable. Interestingly, the VMs running OVZ9
> pre-release are still running host routed containers but the OVZ9 install
> on a physical machine is quite broken in many ways.
>
> I installed from openvz-iso-9.0.1-550.iso, and was able to create a
> container, which seemed to work perfectly well. Then, I installed prlctl,
> which, when invoked, yields this output:
>
> "prlctl: symbol lookup error: prlsrvctl: undefined symbol:
> PrlVmCfg_SetNetfilterMode"
>
> I installed prl-disp-service, which changed the prlctl error to one about
> being unable to contact vz.
>
> So, I installed vcmmd, which pulled in numerous dependencies, and
> apparently downgraded ovz 9.0.1-550 to ovz 9.0.0-264
>
> # cat /etc/virtuozzo-release
> OpenVZ release 9.0.0 (264)
>
> The container created previously has disappeared, and the network is
> broken:
>
> # prlsrvctl net list
> WARNING: You are using a deprecated CLI component that will be dropped in
> the next major release. Please use virsh instead
> Failed to retrieve the list of Virtual Networks: Unexpected error. An
> unexpected error occurred.
>
> Oh, well, there's always the next pre-release to look forward to.
>
> Jake
>
> On Fri, May 26, 2023 at 9:54 AM jjs - mainphrame 
> wrote:
>
>> Downloading now from
>> https://download.openvz.org/virtuozzo/factory9/x86_64/iso/
>>
>> Will test and comment back here - hoping for some improvement in
>> functionality.
>>
>> Jake
>>
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] New OVZ9 iso pre-release

2023-05-27 Thread jjs - mainphrame
It was worth a shot.

Still quite problematic, not usable. Interestingly, the VMs running OVZ9
pre-release are still running host routed containers but the OVZ9 install
on a physical machine is quite broken in many ways.

I installed from openvz-iso-9.0.1-550.iso, and was able to create a
container, which seemed to work perfectly well. Then, I installed prlctl,
which, when invoked, yields this output:

"prlctl: symbol lookup error: prlsrvctl: undefined symbol:
PrlVmCfg_SetNetfilterMode"

I installed prl-disp-service, which changed the prlctl error to one about
being unable to contact vz.

So, I installed vcmmd, which pulled in numerous dependencies, and
apparently downgraded ovz 9.0.1-550 to ovz 9.0.0-264

# cat /etc/virtuozzo-release
OpenVZ release 9.0.0 (264)

The container created previously has disappeared, and the network is broken:

# prlsrvctl net list
WARNING: You are using a deprecated CLI component that will be dropped in
the next major release. Please use virsh instead
Failed to retrieve the list of Virtual Networks: Unexpected error. An
unexpected error occurred.

Oh, well, there's always the next pre-release to look forward to.

Jake

On Fri, May 26, 2023 at 9:54 AM jjs - mainphrame  wrote:

> Downloading now from
> https://download.openvz.org/virtuozzo/factory9/x86_64/iso/
>
> Will test and comment back here - hoping for some improvement in
> functionality.
>
> Jake
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] New OVZ9 iso pre-release

2023-05-26 Thread jjs - mainphrame
Downloading now from
https://download.openvz.org/virtuozzo/factory9/x86_64/iso/

Will test and comment back here - hoping for some improvement in
functionality.

Jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] Current status of OVZ 9 pre-release

2023-04-22 Thread jjs - mainphrame
I've been experimenting with OVZ 9 pre-releases, and they're coming along,
but there is still a way to go.

At present, OVZ 9 is a good platform for running host routed containers.

Beyond that, things are still in flux. Bridged networking is not yet
functional, and prlctl is broken in general, so VMs cannot yet be
created - e.g.

Failed to register the VM: Unable to connect to Virtuozzo. You may
experience a connection problem or the server may be down. Contact your
Virtuozzo administrator for assistance.
Failed to create the virtual machine.
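
(Since the deprecation warnings keep pointing at virsh, the libvirt path may
be worth trying in parallel. Assuming the Virtuozzo libvirt driver still
answers on its documented vz:/// URI - untested here - a starting point
would be:

  virsh -c vz:///system list --all

If that connects, it would at least show whether anything on the dispatcher
side is alive while prlctl is broken.)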

I will continue testing and filing bug reports, and look forward to the
production ready release.

Jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Another reason I use openvz

2023-04-21 Thread jjs - mainphrame
Paulo & Jehan,

Agree with everything you said. I would love to see more openvz-based
solutions, and competition could not hurt.

I think proxmox would have stayed with openvz, but proxmox is Debian-based.

Openvz used to be in the Debian repos and worked well. But after the
focus of openvz was narrowed to Red Hat, and support for Debian was
dropped, proxmox had no choice but to settle for lxc.

The one killer feature of proxmox is the excellent web management
interface. If openvz had something of the same sort, there would be no
contest between the two.

Jake

On Fri, Apr 21, 2023 at 3:09 AM Paulo Coghi - Coghi IT 
wrote:

> I agree 100% with everything both of you stated, and I can't explain what
> happened in the past that set such an image on OpenVZ.
>
> From a technical point of view, OpenVZ is more secure [1], more stable, and
> even slightly faster and more lightweight than LXC [2].
>
> But I would like your opinion on a dilemma I'm facing now: would
> another open source server virtualization platform be helpful to the
> project and to Virtuozzo, by providing more exposure (being exclusively
> based on OpenVZ and MIT licensed), or would it be harmful to Virtuozzo
> because of a (possible) "draining" of commercial customers?
>
> I believe I'm ready to start such a project, but the last time I asked the
> Virtuozzo team, I didn't receive any response on this topic in
> particular, and I would really appreciate feedback from you.
>
> If both Virtuozzo and you, the community, agree that this would help give
> more life and exposure to OpenVZ (and Virtuozzo), I am fully interested in
> starting it.
>
>
> Paulo Coghi
>
>
> [1]
> https://security.stackexchange.com/questions/80532/security-of-lxc-compared-to-openvz
> [2] https://www.diva-portal.org/smash/get/diva2:1052217/FULLTEXT02.pdf
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] Another reason I use openvz

2023-04-20 Thread jjs - mainphrame
While I've seen lxc containers mysteriously hang, suffer bit rot, or
self-destruct, the openvz containers have been solid.

One of my OVZ-7 servers had an old centos 7 container that I used for
testing haproxy, which I'd turned off in 2019.

I vzmigrated it to a new OVZ-9 test machine, started it, and it worked just
as if it hadn't been turned off for 4 years.
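
(For the record, that was the stock migration tool, along the lines of
"vzmigrate <destination-host> <CTID>" - destination and CTID being whatever
applies on your side.)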

Anyway, I'm looking forward to the production release of OVZ 9

jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Having trouble with creating bridged networks on OVZ-9

2023-04-18 Thread jjs - mainphrame
Hi Paulo,

Yes, I installed the 491 release.

I simply ran the following command:

yum install prlctl --nogpgcheck

Jake

On Tue, Apr 18, 2023 at 2:32 AM Paulo Coghi - Coghi IT 
wrote:

> Hello Jake,
>
> Could you share the steps you are using to install "prlctl"?
> Are you using the release 491, available on
> https://download.openvz.org/virtuozzo/factory9/x86_64/iso/openvz-iso-9.0.1-491.iso
> ?
>
>
> Paulo Coghi
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] OpenVZ 7 and vztt/OVZT repo

2023-04-17 Thread jjs - mainphrame
Templates are still getting updated here and I also have vztt.

[root@hachi ~]# cat /etc/virtuozzo-release
OpenVZ release 7.0.19 (347)

Does your repolist resemble the attached image?
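
(If it's absent on your side, a check that doesn't depend on the attached
image:

  yum list available 'vztt*'

That should show whether the package is published in whichever repos you
have enabled.)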


Jake

On Sat, Apr 15, 2023 at 9:49 PM Alexey Zilber 
wrote:

> Hi All,
>
>   I've been using an old custom OS template I made a long time ago, and
> decided today to update the list of OS Ez templates.  I'm running
> "Virtuozzo Linux release 7.9" as per  cat /etc/redhat-release.
>
>  Going through the old threads, it seems vzpkg is deprecated, and in fact
> it doesn't pull down anything new. vztt, though, is missing for me, both in
> the repos (per yum search) and locally.
>
> How do I go about using https://src.openvz.org/projects/OVZT?  There
> seems to be no documentation except a link to that repo. I seem to have
> "vztt_checker" installed, but that's a shared object.
>
> So I'm kind of stuck... is there a package that I need to install to get
> access to those repos?
>
> Thanks!
> Alex
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Having trouble with creating bridged networks on OVZ-9

2023-04-16 Thread jjs - mainphrame
FWIW, I first tried using the physical interface instead of the bridge,
which also fails:

[root@katyusha ~]# prlsrvctl net set firewalla -t bridged --ifname enp5s0
-d "firewalla bridge"
WARNING: You are using a deprecated CLI component that will be dropped in
the next major release. Please use virsh instead
Failed to update Virtual Network firewalla: Could not find a bridge for
network adapter "firewalla". Please make sure the bridge is created.

Jake

On Sat, Apr 15, 2023 at 8:56 PM jjs - mainphrame  wrote:

> Hello all,
>
> After testing ovz-9 alpha releases for some weeks using VMs, I have OVZ9
> installed on a physical machine to do some further testing. Containers with
> host routed connections are working fine, and can migrate to and from OVZ-7
> machines.
>
> So I wanted to go to the next level, create lan and wan bridges for
> bridged networks, but it seems the prlsrvctl syntax has changed, and the
> commands no longer work as they did on OVZ-7.
>
> A bridge br0 was created by the installer on the internal lan, and I added
> a bridge br1 which connects to the default gateway device, but trying to
> associate a bridge with a network fails with an error.
>
> I created the network "Bridged" (which used to be automatically created on
> OVZ-7 install) and it defaulted to type host-only. Then I added the network
> "firewalla" as I have done on the OVZ-7 machine, and it also defaulted to
> host-only"
>
> [root@katyusha ~]# prlsrvctl net list
> WARNING: You are using a deprecated CLI component that will be dropped in
> the next major release. Please use virsh instead
> Network ID    Type       Bound To   Bridge    Slave interfaces
> Bridged       host-only             virbr3
> Host-Only     host-only             virbr1
> firewalla     host-only             virbr4
>
> My next step was to change the type from "host-only" to "bridged", and
> that's where things went south:
>
> [root@katyusha ~]# prlsrvctl net set Bridged -t bridged --ifname br0 -d
> "internal bridge"
> WARNING: You are using a deprecated CLI component that will be dropped in
> the next major release. Please use virsh instead
> Failed to find network adapter br0 on the server.
>
> [root@katyusha ~]# prlsrvctl net set firewalla -t bridged --ifname br1 -d
> "firewalla bridge"
> WARNING: You are using a deprecated CLI component that will be dropped in
> the next major release. Please use virsh instead
> Failed to find network adapter br1 on the server.
>
> Is there some documentation on the updated syntax?
>
> Jake
>
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] Having trouble with creating bridged networks on OVZ-9

2023-04-15 Thread jjs - mainphrame
Hello all,

After testing ovz-9 alpha releases for some weeks using VMs, I have OVZ9
installed on a physical machine to do some further testing. Containers with
host routed connections are working fine, and can migrate to and from OVZ-7
machines.

So I wanted to go to the next level, create lan and wan bridges for
bridged networks, but it seems the prlsrvctl syntax has changed, and the
commands no longer work as they did on OVZ-7.

A bridge br0 was created by the installer on the internal lan, and I added
a bridge br1 which connects to the default gateway device, but trying to
associate a bridge with a network fails with an error.

I created the network "Bridged" (which used to be automatically created on
OVZ-7 install) and it defaulted to type host-only. Then I added the network
"firewalla" as I had done on the OVZ-7 machine, and it also defaulted to
host-only.

[root@katyusha ~]# prlsrvctl net list
WARNING: You are using a deprecated CLI component that will be dropped in
the next major release. Please use virsh instead
Network ID    Type       Bound To   Bridge    Slave interfaces
Bridged       host-only             virbr3
Host-Only     host-only             virbr1
firewalla     host-only             virbr4

My next step was to change the type from "host-only" to "bridged", and
that's where things went south:

[root@katyusha ~]# prlsrvctl net set Bridged -t bridged --ifname br0 -d
"internal bridge"
WARNING: You are using a deprecated CLI component that will be dropped in
the next major release. Please use virsh instead
Failed to find network adapter br0 on the server.

[root@katyusha ~]# prlsrvctl net set firewalla -t bridged --ifname br1 -d
"firewalla bridge"
WARNING: You are using a deprecated CLI component that will be dropped in
the next major release. Please use virsh instead
Failed to find network adapter br1 on the server.

Is there some documentation on the updated syntax?

Jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Updates on OpenVZ 9 testing - 9.0.0-264 and 9.0.1-458 releases

2023-04-03 Thread jjs - mainphrame
My 458 pre-release is up and running here, and I can install packages with
the --nogpgcheck option, but I'll wait for the key issues to be solved
before installing on a physical machine for more involved testing.

Joe

On Mon, Apr 3, 2023 at 5:00 AM Paulo Coghi - Coghi IT 
wrote:

> Sorry. Now I read your comments on the issue, and I see the problem still
> persists.
>
> On Mon, Apr 3, 2023 at 1:54 PM Paulo Coghi - Coghi IT <
> pauloco...@gmail.com> wrote:
>
>> Hi Jake,
>>
>> Were you able to overcome the GPG keys issue in release 458? If yes, how?
>>
>> On Fri, Mar 31, 2023 at 10:12 PM jjs - mainphrame 
>> wrote:
>>
>>> Hi Paulo,
>>>
>>> Try release 458 from here:
>>>
>>> https://download.openvz.org/virtuozzo/factory9/x86_64/iso/
>>>
>>> I'm running it in a VM and testing with containers but it's an
>>> improvement over the previous pre-releases.
>>>
>>> jake
>>>
>>> On Fri, Mar 31, 2023 at 2:14 AM Paulo Coghi - Coghi IT <
>>> pauloco...@gmail.com> wrote:
>>>
>>>> Hello OpenVZ community,
>>>>
>>>> Today I tried the 9.0.0-264 release on
>>>> https://download.openvz.org/virtuozzo/releases/9.0/x86_64/iso/openvz-iso-9.0.0-264.iso,
>>>> but it doesn't boot properly.
>>>>
>>>> The printed error messages were:
>>>>
>>>> [4.798795] systemd[1]: Assertion 't > 0' failed at
>>>> src/basic/limits-util.c:182, function system_tasks_max_scale(). Aborting.
>>>> [4.799584] systemd[1]: Caught <ABRT>, core dump failed (child 280,
>>>> code=killed, status=6/ABRT)
>>>> [4.799762] systemd[1]: Freezing execution
>>>>
>>>>
>>>> In a few minutes I will try the 9.0.1-458 version located on "factory"
>>>> and post the results.
>>>> ___
>>>> Users mailing list
>>>> Users@openvz.org
>>>> https://lists.openvz.org/mailman/listinfo/users
>>>>
>>> ___
>>> Users mailing list
>>> Users@openvz.org
>>> https://lists.openvz.org/mailman/listinfo/users
>>>
>> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Updates on OpenVZ 9 testing - 9.0.0-264 and 9.0.1-458 releases

2023-03-31 Thread jjs - mainphrame
Hi Paulo,

Try release 458 from here:

https://download.openvz.org/virtuozzo/factory9/x86_64/iso/

I'm running it in a VM and testing with containers but it's an improvement
over the previous pre-releases.

jake

On Fri, Mar 31, 2023 at 2:14 AM Paulo Coghi - Coghi IT 
wrote:

> Hello OpenVZ community,
>
> Today I tried the 9.0.0-264 release on
> https://download.openvz.org/virtuozzo/releases/9.0/x86_64/iso/openvz-iso-9.0.0-264.iso,
> but it doesn't boot properly.
>
> The printed error messages were:
>
> [4.798795] systemd[1]: Assertion 't > 0' failed at
> src/basic/limits-util.c:182, function system_tasks_max_scale(). Aborting.
> [4.799584] systemd[1]: Caught <ABRT>, core dump failed (child 280,
> code=killed, status=6/ABRT)
> [4.799762] systemd[1]: Freezing execution
>
>
> In a few minutes I will try the 9.0.1-458 version located on "factory" and
> post the results.
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Status of OVZ 8 & 9

2023-02-20 Thread jjs - mainphrame
So, openvz-iso-9.0.1-417.iso is a bust.

After installation, no bootable disk could be found. I booted up in rescue
mode and there was an install there, so I chrooted to it and ran grub
install, and even though it seemed to succeed, there was still no bootable
disk found on the next startup.

I'll file this one away under unsuccessful experiments.

Jake

On Mon, Feb 20, 2023 at 1:23 PM jjs - mainphrame  wrote:

> Downloading the most recent ovz9 iso, let's see how it goes...
>
> https://download.openvz.org/virtuozzo/factory9/x86_64/iso/
>
> Jake
>
> On Mon, Feb 13, 2023 at 3:18 PM jjs - mainphrame 
> wrote:
>
>> Unfortunately, I ran into the same issue, and filed a bug report.
>>
>> Jake
>>
>> On Mon, Feb 13, 2023 at 11:16 AM Paulo Coghi - Coghi IT <
>> pauloco...@gmail.com> wrote:
>>
>>> Jake, let us know if the new ISO solves the issue!
>>>
>>>
>>>> good, let us know.
>>>>
>>>> I did open a bug report regarding this issue:
>>>>
>>>> https://bugs.openvz.org/browse/OVZ-7419
>>>>
>>>> it was marked as resolved last week, but I still fail to install prlctl
>>>> (I just did a dnf clean all), so I reopened the issue.
>>>>
>>>> maybe the fix is in that new iso? Or should I uninstall/reinstall the
>>>> openvz-release-9.0.1-383.vz9.x86_64 package? I didn't try that because it
>>>> also requires removing 75 packages (qemu* ...) as dependencies.
>>>>
>>>> Jehan
>>>>
>>>> PS: anyway, if prlctl finally gets installed, is this the way to go?
>>>> The "deprecated" message is not reassuring.
>>>> On 13/02/2023 06:11, jjs - mainphrame wrote:
>>>>
>>>> I see there's a new pre-release iso, downloading it now -
>>>>
>>>>
>>>> https://download.openvz.org/virtuozzo/factory9/x86_64/iso/openvz-iso-9.0.1-412.iso
>>>>
>>>> Jake
>>>>
>>>> _
>>>> Users mailing list
>>>> Users@openvz.org
>>>> https://lists.openvz.org/mailman/listinfo/users
>>>>
>>>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Status of OVZ 8 & 9

2023-02-20 Thread jjs - mainphrame
Downloading the most recent ovz9 iso, let's see how it goes...

https://download.openvz.org/virtuozzo/factory9/x86_64/iso/

Jake

On Mon, Feb 13, 2023 at 3:18 PM jjs - mainphrame  wrote:

> Unfortunately, I ran into the same issue, and filed a bug report.
>
> Jake
>
> On Mon, Feb 13, 2023 at 11:16 AM Paulo Coghi - Coghi IT <
> pauloco...@gmail.com> wrote:
>
>> Jake, let us know if the new ISO solves the issue!
>>
>>
>>> good, let us know.
>>>
>>> I did open a bug report regarding this issue:
>>>
>>> https://bugs.openvz.org/browse/OVZ-7419
>>>
>>> it was marked as resolved last week, but I still fail to install prlctl
>>> (I just did a dnf clean all), so I reopened the issue.
>>>
>>> maybe the fix is in that new iso? Or should I uninstall/reinstall the
>>> openvz-release-9.0.1-383.vz9.x86_64 package? I didn't try that because it
>>> also requires removing 75 packages (qemu* ...) as dependencies.
>>>
>>> Jehan
>>>
>>> PS: anyway, if prlctl finally gets installed, is this the way to go? The
>>> "deprecated" message is not reassuring.
>>> On 13/02/2023 06:11, jjs - mainphrame wrote:
>>>
>>> I see there's a new pre-release iso, downloading it now -
>>>
>>>
>>> https://download.openvz.org/virtuozzo/factory9/x86_64/iso/openvz-iso-9.0.1-412.iso
>>>
>>> Jake
>>>
>>> _
>>> Users mailing list
>>> Users@openvz.org
>>> https://lists.openvz.org/mailman/listinfo/users
>>>
>>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Status of OVZ 8 & 9

2023-02-14 Thread jjs - mainphrame
Installed the OS, did a successful yum update, but received an error when
installing prlctl and prl-disp-service:

Error: Transaction test error:
  file /etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-9 from install of
openvz-release-9.0.0-257.vz9.x86_64 conflicts with file from package
vzlinux-release-3:9.0-39.vl9.x86_64

Based on experiences with previous pre-releases, I won't pursue this issue
further until the next iso appears.

Jake

On Mon, Feb 13, 2023 at 12:09 AM jehan Procaccia <
jehan.procac...@imtbs-tsp.eu> wrote:

> good, let us know.
>
> I did open a bug report regarding this issue:
>
> https://bugs.openvz.org/browse/OVZ-7419
>
> it was marked as resolved last week, but I still fail to install prlctl
> (I just did a dnf clean all), so I reopened the issue.
>
> maybe the fix is in that new iso? Or should I uninstall/reinstall the
> openvz-release-9.0.1-383.vz9.x86_64 package? I didn't try that because it
> also requires removing 75 packages (qemu* ...) as dependencies.
>
> Jehan
>
> PS: anyway, if prlctl finally gets installed, is this the way to go? The
> "deprecated" message is not reassuring.
> On 13/02/2023 06:11, jjs - mainphrame wrote:
>
> I see there's a new pre-release iso, downloading it now -
>
>
> https://download.openvz.org/virtuozzo/factory9/x86_64/iso/openvz-iso-9.0.1-412.iso
>
> Jake
>
> On Thu, Dec 8, 2022 at 4:16 PM jjs - mainphrame 
> wrote:
>
>> I've been running openvz 7 for some years, and I periodically check on
>> the status of openvz 8 and 9.
>>
>> While openvz 7 has been getting updates, it seems openvz 8 is fairly
>> static, and openvz 9 seems not ready for use.
>>
>> Is there an intent to continue support of openvz beyond version 7?
>>
>> Since openvz is a great advertisement for virtuozzo, it would be a shame
>> if it faded away.
>>
>> Jake
>>
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Status of OVZ 8 & 9

2023-02-13 Thread jjs - mainphrame
Unfortunately, I ran into the same issue, and filed a bug report.

Jake

On Mon, Feb 13, 2023 at 11:16 AM Paulo Coghi - Coghi IT <
pauloco...@gmail.com> wrote:

> Jake, let us know if the new ISO solves the issue!
>
>
>> good, let us know.
>>
>> I did open a bug report regarding this issue:
>>
>> https://bugs.openvz.org/browse/OVZ-7419
>>
>> it was marked as resolved last week, but I still fail to install prlctl
>> (I just did a dnf clean all), so I reopened the issue.
>>
>> maybe the fix is in that new iso? Or should I uninstall/reinstall the
>> openvz-release-9.0.1-383.vz9.x86_64 package? I didn't try that because it
>> also requires removing 75 packages (qemu* ...) as dependencies.
>>
>> Jehan
>>
>> PS: anyway, if prlctl finally gets installed, is this the way to go? The
>> "deprecated" message is not reassuring.
>> On 13/02/2023 06:11, jjs - mainphrame wrote:
>>
>> I see there's a new pre-release iso, downloading it now -
>>
>>
>> https://download.openvz.org/virtuozzo/factory9/x86_64/iso/openvz-iso-9.0.1-412.iso
>>
>> Jake
>>
>> _
>> Users mailing list
>> Users@openvz.org
>> https://lists.openvz.org/mailman/listinfo/users
>>
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Status of OVZ 8 & 9

2023-02-12 Thread jjs - mainphrame
I see there's a new pre-release iso, downloading it now -

https://download.openvz.org/virtuozzo/factory9/x86_64/iso/openvz-iso-9.0.1-412.iso

Jake

On Thu, Dec 8, 2022 at 4:16 PM jjs - mainphrame  wrote:

> I've been running openvz 7 for some years, and I periodically check on the
> status of openvz 8 and 9.
>
> While openvz 7 has been getting updates, it seems openvz 8 is fairly
> static, and openvz 9 seems not ready for use.
>
> Is there an intent to continue support of openvz beyond version 7?
>
> Since openvz is a great advertisement for virtuozzo, it would be a shame
> if it faded away.
>
> Jake
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Status of OVZ 8 & 9

2023-02-05 Thread jjs - mainphrame
Thanks for sharing this work; it allowed me to make some progress.

There seem to be other issues though. Hopefully the fixes will come.
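
(For anyone replaying step 7 of the procedure quoted below: the xx.spec
placeholder is presumably vzlinux-release.spec, judging by the SRPM name.
An illustrative, untested shortcut for the Source13 deletion, using GNU
sed's case-insensitive address flag:

  cd ~/rpmbuild/SPECS/
  sed -i '/source13/Id' vzlinux-release.spec

then bump the 39 in the Release line to 40 by hand, as described.)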

Jake



On Sun, Feb 5, 2023 at 2:12 PM Aurélien GUERSON <
aurelien.guer...@imtbs-tsp.eu> wrote:

> Hi guys,
>
> I tried something and it seems ok.
>
> If you want to update:
>
> 1) disable gpg check
>
> 2) disable the factory repo ( never use factory! )
>
> 3) install python3-devel rpmdevtools rpmlint
>
> 4) remove vzlinux-release
>
> rpm -e --nodeps vzlinux-release
>
> 5) download the src
>
> cd /usr/src/
>
> wget
>
> http://repo.virtuozzo.com/vzlinux/9.0/source/SRPMS/v/vzlinux-release-9.0-39.vl9.src.rpm
>
>
> 6) install it
>
> rpm -ivh vzlinux-release-9.0-39.vl9.src.rpm
>
> 7) modify the SPEC file
>
> cd /root/rpmbuild/SPECS/
>
> vim xx.spec
>
> delete the Source13 and SOURCE13 lines from the spec file
>
> change the 39 to 40 in the release field
>
> 8) recreate .rpm file
>
> rpmbuild -ba ~/rpmbuild/SPECS/xx.spec
>
> 9) install it
>
> yum install /root/rpmbuild/RPMS/x86_64/xx.rpm
>
> 10) update all
>
> yum clean all
>
> yum update
>
> 11) install prlctl
>
> yum install prlctl
>
>
> => it still shows the deprecation warning message.
>
>
> now you can try.
>
>
> Regards,
>
>
> --
> Aurélien GUERSON
>
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Status of OVZ 8 & 9

2023-02-04 Thread jjs - mainphrame
Everything was looking good until the end:

Error: Transaction test error:
  file /etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-9 from install of
openvz-release-9.0.0-257.vz9.x86_64 conflicts with file from package
vzlinux-release-3:9.0-39.vl9.x86_64

So, it looks like they still have some things to fix before we can
really test it.

Jake

On Sat, Feb 4, 2023 at 1:16 AM Paulo Coghi - Coghi IT 
wrote:

> When possible, try installing all the packages below and try again:
>
> yum install prlctl prl-disp-service vcztl
>
> On Fri, Feb 3, 2023 at 5:17 PM jjs - mainphrame 
> wrote:
>
>> I temporarily set gpgcheck=0 in the repo files and was able to install
>> prlctl, but it was not functional because there are other dependencies that
>> need to be installed and running.
>>
>> [root@lavrov ~]# prlctl list
>> Login failed: Unable to connect to Virtuozzo. You may experience a
>> connection problem or the server may be down. Contact your Virtuozzo
>> administrator for assistance.
>>
>> I'll need to install on a physical box to go any farther..
>>
>> Jake
>>
>> On Fri, Feb 3, 2023 at 2:14 AM jehan Procaccia <
>> jehan.procac...@imtbs-tsp.eu> wrote:
>>
>>> Hi,
>>>
>>> I also want to give OVZ9 a try; I installed it from
>>> https://download.openvz.org/virtuozzo/factory9/x86_64/iso/openvz-iso-9.0.1-383.iso
>>>
>>> although everything went fine, now that I want to configure networking
>>> (vlans/bridges), I don't know how to proceed if the prlctl package is not
>>> available!? I am also confronted with the GPG key problem [1]
>>>
>>> are prlctl and prlsrvctl still the way to go [2], or are they deprecated
>>> in OVZ9? In that case, how should we configure networking? Following the
>>> RHEL9 docs?
>>>
>>> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/configuring_and_managing_networking/index#doc-wrapper
>>>
>>> please let us know if you want the community to contribute to and test
>>> OVZ9.
>>>
>>> thanks
>>>
>>> jehan.
>>>
>>> [1]
>>>
>>>
>>>
>>>
>>> [root@tovz ~]# dnf install prlctl
>>> Installing:
>>>  prlctl   x86_64   9.0.2-1.vz9   openvz-os   583 k
>>>
>>> GPG key at file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-9 (0x463278F2) is
>>> already installed
>>> The GPG keys listed for the "OpenVZ" repository are already installed but
>>> they are not correct for this package.
>>> Check that the correct key URLs are configured for this repository..
>>> Failing package is: prlctl-9.0.2-1.vz9.x86_64
>>>  GPG Keys are configured as: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-9
>>> The downloaded packages were saved in cache until the next successful
>>> transaction.
>>> You can remove cached packages by executing 'dnf clean packages'.
>>> Error: GPG check FAILED
>>>
>>> [2]
>>>
>>>
>>> https://docs.virtuozzo.com/virtuozzo_hybrid_server_7_users_guide/managing-network/configuring-virtual-machines-and-containers-in-bridged-mode.html
>>> On 02/02/2023 19:12, jjs - mainphrame wrote:
>>>
>>> Agreed Paulo, virsh always seemed to me a sort of least common
>>> denominator, a dumbed down and reduced capability replacement for the
>>> virtuozzo tools we all know and love.
>>>
>>> Jake
>>>
>>> On Wed, Feb 1, 2023 at 6:58 PM Paulo Coghi - Coghi IT <
>>> pauloco...@gmail.com> wrote:
>>>
>>>> Hello Jake,
>>>>
>>>> Thank you for your valuable feedback! Let's see what the Virtuozzo dev
>>>> team has to say about this issue with the GPG key for the "prlctl" tools.
>>>>
>>>> By the way, the last time I tried virsh with OpenVZ (version 8, at the
>>>> time of the test), the experience was not as good nor as well documented
>>>> as with prlctl. But we are already receiving the warning about prlctl
>>>> being deprecated.
>>>>
>>>> There are some niche cases in which virsh doesn't seem capable, like
>>>> setting "cpulimit".
>>>>
>>>>
>>>> Paulo Coghi
>>>>
>>>> On Wed, Feb 1, 2023 at 9:24 PM jjs - mainphrame 
>>>> wrote:
>>>>
>>>>> Everything was looking good

Re: [Users] Status of OVZ 8 & 9

2023-02-03 Thread jjs - mainphrame
I temporarily set gpgcheck=0 in the repo files and was able to install
prlctl, but it was not functional because there are other dependencies that
need to be installed and running.
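
(Concretely, that just means flipping the flag in each repo definition; the
glob is a sketch, since the exact file names under /etc/yum.repos.d/ may
differ:

  sed -i 's/^gpgcheck=1/gpgcheck=0/' /etc/yum.repos.d/*.repo

and remembering to switch it back once the packaging keys are sorted out.)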

[root@lavrov ~]# prlctl list
Login failed: Unable to connect to Virtuozzo. You may experience a
connection problem or the server may be down. Contact your Virtuozzo
administrator for assistance.
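
(That particular error is just the client failing to reach the dispatcher.
Assuming the vz7-era unit name carries over to 9 - unverified on my side:

  systemctl status prl-disp.service

would confirm whether the dispatcher service is present and running at all.)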

I'll need to install on a physical box to go any farther..

Jake

On Fri, Feb 3, 2023 at 2:14 AM jehan Procaccia 
wrote:

> Hi,
>
> I also want to give OVZ9 a try; I installed it from
> https://download.openvz.org/virtuozzo/factory9/x86_64/iso/openvz-iso-9.0.1-383.iso
>
> although everything went fine, now that I want to configure networking
> (vlans/bridges), I don't know how to proceed if the prlctl package is not
> available!? I am also confronted with the GPG key problem [1]
>
> are prlctl and prlsrvctl still the way to go [2], or are they deprecated in
> OVZ9? In that case, how should we configure networking? Following the RHEL9
> docs?
>
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/configuring_and_managing_networking/index#doc-wrapper
>
> please let us know if you want the community to contribute to and test
> OVZ9.
>
> thanks
>
> jehan.
>
> [1]
>
>
>
>
> [root@tovz ~]# dnf install prlctl
> Installing:
>  prlctl   x86_64   9.0.2-1.vz9   openvz-os   583 k
>
> GPG key at file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-9 (0x463278F2) is
> already installed
> The GPG keys listed for the "OpenVZ" repository are already installed but
> they are not correct for this package.
> Check that the correct key URLs are configured for this repository..
> Failing package is: prlctl-9.0.2-1.vz9.x86_64
>  GPG Keys are configured as: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-9
> The downloaded packages were saved in cache until the next successful
> transaction.
> You can remove cached packages by executing 'dnf clean packages'.
> Error: GPG check FAILED
>
> [2]
>
>
> https://docs.virtuozzo.com/virtuozzo_hybrid_server_7_users_guide/managing-network/configuring-virtual-machines-and-containers-in-bridged-mode.html
> On 02/02/2023 19:12, jjs - mainphrame wrote:
>
> Agreed Paulo, virsh always seemed to me a sort of least common
> denominator, a dumbed down and reduced capability replacement for the
> virtuozzo tools we all know and love.
>
> Jake
>
> On Wed, Feb 1, 2023 at 6:58 PM Paulo Coghi - Coghi IT <
> pauloco...@gmail.com> wrote:
>
>> Hello Jake,
>>
>> Thank you for your valuable feedback! Let's see what the Virtuozzo dev
>> team has to say about this issue with the GPG key for the "prlctl" tools.
>>
>> By the way, the last time I tried virsh with OpenVZ (version 8, at the time
>> of the test), the experience was not as good nor as well documented as with
>> prlctl. But we are already receiving the warning about prlctl being
>> deprecated.
>>
>> There are some niche cases in which virsh doesn't seem capable, like
>> setting "cpulimit".
>>
>>
>> Paulo Coghi
>>
>> On Wed, Feb 1, 2023 at 9:24 PM jjs - mainphrame 
>> wrote:
>>
>>> Everything was looking good, and I was considering installing ovz 9 on a
>>> physical server, but I ran into a weird issue with the GPG keys when I
>>> tried to install prlctl:
>>>
>>> GPG key at file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-9 (0x463278F2)
>>> is already installed
>>> The GPG keys listed for the "OpenVZ" repository are already installed
>>> but they are not correct for this package.
>>> Check that the correct key URLs are configured for this repository..
>>> Failing package is: prlctl-9.0.2-1.vz9.x86_64
>>>  GPG Keys are configured as:
>>> file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-9
>>> The downloaded packages were saved in cache until the next successful
>>> transaction.
>>> You can remove cached packages by executing 'yum clean packages'.
>>> Error: GPG check FAILED
>>>
>>> Jake
>>>
>>> On Wed, Feb 1, 2023 at 9:21 AM jjs - mainphrame 
>>> wrote:
>>>
>>>> The iso is indeed a new image.
>>>>
>>>> I've installed it in a VM and have been poking around, looks promising
>>>> so far, creating a few containers and taking them for a spin.
>>>>
>>>> Jake
>>>>
>>>> On Tue, Jan 31, 2023 at 7:21 PM jjs - mainphrame 
>

Re: [Users] Status of OVZ 8 & 9

2023-02-02 Thread jjs - mainphrame
Agreed Paulo, virsh always seemed to me a sort of least common denominator,
a dumbed down and reduced capability replacement for the virtuozzo tools we
all know and love.

Jake

On Wed, Feb 1, 2023 at 6:58 PM Paulo Coghi - Coghi IT 
wrote:

> Hello Jake,
>
> Thank you for your valuable feedback! Let's see what the Virtuozzo dev
> team has to say about this issue with the GPG key for the "prlctl" tools.
>
> By the way, the last time I tried virsh with OpenVZ (version 8, at the time
> of the test), the experience was not as good nor as well documented as with
> prlctl. But we are already receiving the warning about prlctl being
> deprecated.
>
> There are some niche cases in which virsh doesn't seem capable, like
> setting "cpulimit".
>
>
> Paulo Coghi
>
> On Wed, Feb 1, 2023 at 9:24 PM jjs - mainphrame 
> wrote:
>
>> Everything was looking good, and I was considering installing ovz 9 on a
>> physical server, but I ran into a weird issue with the GPG keys when I
>> tried to install prlctl:
>>
>> GPG key at file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-9 (0x463278F2)
>> is already installed
>> The GPG keys listed for the "OpenVZ" repository are already installed but
>> they are not correct for this package.
>> Check that the correct key URLs are configured for this repository..
>> Failing package is: prlctl-9.0.2-1.vz9.x86_64
>>  GPG Keys are configured as:
>> file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-9
>> The downloaded packages were saved in cache until the next successful
>> transaction.
>> You can remove cached packages by executing 'yum clean packages'.
>> Error: GPG check FAILED
>>
>> Jake
>>
>> On Wed, Feb 1, 2023 at 9:21 AM jjs - mainphrame 
>> wrote:
>>
>>> The iso is indeed a new image.
>>>
>>> I've installed it in a VM and have been poking around, looks promising
>>> so far, creating a few containers and taking them for a spin.
>>>
>>> Jake
>>>
>>> On Tue, Jan 31, 2023 at 7:21 PM jjs - mainphrame 
>>> wrote:
>>>
>>>> Downloading, will investigate.
>>>>
>>>> Jake
>>>>
>>>> On Tue, Jan 31, 2023 at 4:11 PM Paulo Coghi - Coghi IT <
>>>> pauloco...@gmail.com> wrote:
>>>>
>>>>> What about this one, dated 27-Jan-2023?
>>>>>
>>>>>
>>>>> https://download.openvz.org/virtuozzo/factory9/x86_64/iso/openvz-iso-9.0.1-383.iso
>>>>>
>>>>> Paulo Coghi
>>>>>
>>>>> On Tue, Jan 31, 2023 at 10:07 PM jjs - mainphrame 
>>>>> wrote:
>>>>>
>>>>>> I downloaded the vz9.iso and mounted it, and all the files are dated
>>>>>> Feb 2 2022.
>>>>>>
>>>>>> So, no joy, despite the deceptive Dec 2022 date on the iso.
>>>>>>
>>>>>> Jake
>>>>>>
>>>>>> On Tue, Jan 31, 2023 at 7:38 AM jehan Procaccia <
>>>>>> jehan.procac...@imtbs-tsp.eu> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> actually I wonder if openvz-iso-9:
>>>>>>>
>>>>>>> https://download.openvz.org/virtuozzo/releases/9.0/x86_64/iso/
>>>>>>>
>>>>>>> *openvz-iso-9.0.0.iso   24-Feb-2022 04:41   2.9G*
>>>>>>>
>>>>>>> which is supposed to be the base reference for virtuozzo 9 ,
>>>>>>>
>>>>>>> is the same as
>>>>>>>
>>>>>>> http://repo.virtuozzo.com/vz/releases/
>>>>>>>
>>>>>>> *vz9.iso   20-Dec-2022 12:31  2G*
>>>>>>>
>>>>>>> please let us know which .iso we should start with to test vz9 (open
>>>>>>> version)
>>>>>>>
>>>>>>> why don't they have the same date (Feb 2022 vs Dec 2022)?
>>>>>>>
>>>>>>> Thanks.
>>>>>>>
>>>>>> ___
>>>>> Users mailing list
>>>>> Users@openvz.org
>>>>> https://lists.openvz.org/mailman/listinfo/users
>>>>>
>>>> ___
>> Users mailing list
>> Users@openvz.org
>> https://lists.openvz.org/mailman/listinfo/users
>>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Status of OVZ 8 & 9

2023-02-01 Thread jjs - mainphrame
Everything was looking good, and I was considering installing ovz 9 on a
physical server, but I ran into a weird issue with the GPG keys when I
tried to install prlctl:

GPG key at file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-9 (0x463278F2) is
already installed
The GPG keys listed for the "OpenVZ" repository are already installed but
they are not correct for this package.
Check that the correct key URLs are configured for this repository..
Failing package is: prlctl-9.0.2-1.vz9.x86_64
 GPG Keys are configured as: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-9
The downloaded packages were saved in cache until the next successful
transaction.
You can remove cached packages by executing 'yum clean packages'.
Error: GPG check FAILED
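
(A generic rpm-side workaround that may be worth a try, sketched from the
key id in the error and not verified against this repo: list the installed
pubkeys,

  rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'

remove the stale one matching 463278f2,

  rpm -e gpg-pubkey-463278f2-<release>

and re-import the key file the repo points at:

  rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-9

No guarantee the packages are actually signed with that key, though.)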

Jake

On Wed, Feb 1, 2023 at 9:21 AM jjs - mainphrame  wrote:

> The iso is indeed a new image.
>
> I've installed it in a VM and have been poking around, looks promising so
> far, creating a few containers and taking them for a spin.
>
> Jake
>
> On Tue, Jan 31, 2023 at 7:21 PM jjs - mainphrame 
> wrote:
>
>> Downloading, will investigate.
>>
>> Jake
>>
>> On Tue, Jan 31, 2023 at 4:11 PM Paulo Coghi - Coghi IT <
>> pauloco...@gmail.com> wrote:
>>
>>> What about this one, dated 27-Jan-2023?
>>>
>>>
>>> https://download.openvz.org/virtuozzo/factory9/x86_64/iso/openvz-iso-9.0.1-383.iso
>>>
>>> Paulo Coghi
>>>
>>> On Tue, Jan 31, 2023 at 10:07 PM jjs - mainphrame 
>>> wrote:
>>>
>>>> I downloaded the vz9.iso and mounted it, and all the files are dated
>>>> Feb 2 2022.
>>>>
>>>> So, no joy, despite the deceptive Dec 2022 date on the iso.
>>>>
>>>> Jake
>>>>
>>>> On Tue, Jan 31, 2023 at 7:38 AM jehan Procaccia <
>>>> jehan.procac...@imtbs-tsp.eu> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> actually I wonder if openvz-iso-9:
>>>>>
>>>>> https://download.openvz.org/virtuozzo/releases/9.0/x86_64/iso/
>>>>>
>>>>> *openvz-iso-9.0.0.iso   24-Feb-2022 04:41   2.9G*
>>>>>
>>>>> which is supposed to be the base reference for virtuozzo 9 ,
>>>>>
>>>>> is the same as
>>>>>
>>>>> http://repo.virtuozzo.com/vz/releases/
>>>>>
>>>>> *vz9.iso   20-Dec-2022 12:31  2G*
>>>>>
>>>>> please let us know which .iso we should start with to test vz9 (open
>>>>> version)
>>>>>
>>>>> why don't they have the same date (Feb 2022 vs Dec 2022)?
>>>>>
>>>>> Thanks.
>>>>>
>>>> ___
>>> Users mailing list
>>> Users@openvz.org
>>> https://lists.openvz.org/mailman/listinfo/users
>>>
>>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Status of OVZ 8 & 9

2023-02-01 Thread jjs - mainphrame
The iso is indeed a new image.

I've installed it in a VM and have been poking around, looks promising so
far, creating a few containers and taking them for a spin.

Jake

On Tue, Jan 31, 2023 at 7:21 PM jjs - mainphrame  wrote:

> Downloading, will investigate.
>
> Jake
>
> On Tue, Jan 31, 2023 at 4:11 PM Paulo Coghi - Coghi IT <
> pauloco...@gmail.com> wrote:
>
>> What about this one, dated 27-Jan-2023?
>>
>>
>> https://download.openvz.org/virtuozzo/factory9/x86_64/iso/openvz-iso-9.0.1-383.iso
>>
>> Paulo Coghi
>>
>> On Tue, Jan 31, 2023 at 10:07 PM jjs - mainphrame 
>> wrote:
>>
>>> I downloaded the vz9.iso and mounted it, and all the files are dated Feb
>>> 2 2022.
>>>
>>> So, no joy, despite the deceptive Dec 2022 date on the iso.
>>>
>>> Jake
>>>
>>> On Tue, Jan 31, 2023 at 7:38 AM jehan Procaccia <
>>> jehan.procac...@imtbs-tsp.eu> wrote:
>>>
>>>> Hi,
>>>>
>>>> actually I wonder if openvz-iso-9:
>>>>
>>>> https://download.openvz.org/virtuozzo/releases/9.0/x86_64/iso/
>>>>
>>>> *openvz-iso-9.0.0.iso   24-Feb-2022 04:41   2.9G*
>>>>
>>>> which is supposed to be the base reference for virtuozzo 9 ,
>>>>
>>>> is the same as
>>>>
>>>> http://repo.virtuozzo.com/vz/releases/
>>>>
>>>> *vz9.iso   20-Dec-2022 12:31  2G*
>>>>
>>>> please let us know which .iso we should start with to test vz9 (open
>>>> version)
>>>>
>>>> why don't they have the same date (Feb 2022 vs Dec 2022)?
>>>>
>>>> Thanks.
>>>>
>>> ___
>> Users mailing list
>> Users@openvz.org
>> https://lists.openvz.org/mailman/listinfo/users
>>
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Status of OVZ 8 & 9

2023-01-31 Thread jjs - mainphrame
Downloading, will investigate.

Jake

On Tue, Jan 31, 2023 at 4:11 PM Paulo Coghi - Coghi IT 
wrote:

> What about this one, dated 27-Jan-2023?
>
>
> https://download.openvz.org/virtuozzo/factory9/x86_64/iso/openvz-iso-9.0.1-383.iso
>
> Paulo Coghi
>
> On Tue, Jan 31, 2023 at 10:07 PM jjs - mainphrame 
> wrote:
>
>> I downloaded the vz9.iso and mounted it, and all the files are dated Feb
>> 2 2022.
>>
>> So, no joy, despite the deceptive Dec 2022 date on the iso.
>>
>> Jake
>>
>> On Tue, Jan 31, 2023 at 7:38 AM jehan Procaccia <
>> jehan.procac...@imtbs-tsp.eu> wrote:
>>
>>> Hi,
>>>
>>> actually I wonder if openvz-iso-9:
>>>
>>> https://download.openvz.org/virtuozzo/releases/9.0/x86_64/iso/
>>>
>>> *openvz-iso-9.0.0.iso   24-Feb-2022 04:41   2.9G*
>>>
>>> which is supposed to be the base reference for virtuozzo 9 ,
>>>
>>> is the same as
>>>
>>> http://repo.virtuozzo.com/vz/releases/
>>>
>>> *vz9.iso   20-Dec-2022 12:31  2G*
>>>
>>> please let us know which .iso we should start with to test vz9 (open
>>> version)
>>>
>>> why don't they have the same date (Feb 2022 vs Dec 2022)?
>>>
>>> Thanks.
>>>
>> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Status of OVZ 8 & 9

2023-01-31 Thread jjs - mainphrame
I downloaded the vz9.iso and mounted it, and all the files are dated Feb 2
2022.

So, no joy, despite the deceptive Dec 2022 date on the iso.

Jake

On Tue, Jan 31, 2023 at 7:38 AM jehan Procaccia <
jehan.procac...@imtbs-tsp.eu> wrote:

> Hi,
>
> actually I wonder if openvz-iso-9:
>
> https://download.openvz.org/virtuozzo/releases/9.0/x86_64/iso/
>
> *openvz-iso-9.0.0.iso   24-Feb-2022 04:41   2.9G*
>
> which is supposed to be the base reference for virtuozzo 9 ,
>
> is the same as
>
> http://repo.virtuozzo.com/vz/releases/
>
> *vz9.iso   20-Dec-2022 12:31  2G*
>
> please let us know which .iso we should start with to test vz9 (open
> version)
>
> why don't they have the same date (Feb 2022 vs Dec 2022)?
>
> Thanks.
> On 28/01/2023 11:21, jehan Procaccia wrote:
>
> I hope I am not wrong to disagree with your skepticism about the future of
> openVZ (7, 8, 9 ...)
>
> check in that same thread discussion :
>
> https://marc.info/?l=openvz-users&m=167080032829556&w=2
>
> I confirm that running openVZ 7 is rock solid, and we do use lots of CTs;
> for me it is the best-featured container solution
>
> we are expecting a continuation with VZ 9 (as 8 might be skipped). As a
> public academic school we appreciate free and open-source software
>
> but we keep purchasing a few virtuozzo commercial licences, as much as we
> can, to contribute to the project.
>
> as long as this:
> https://docs.virtuozzo.com/virtuozzo_hybrid_server_7_users_guide/learning-basics/vhs-vs-openvz.html
>
> *OpenVZ (https://openvz.org/) is a free, open-source virtualization
> solution available under GNU GPL. OpenVZ is the base for Virtuozzo Hybrid
> Server, the commercial solution that builds on OpenVZ and offers additional
> benefits to customers.*
>
> is still true, that is, if Virtuozzo Hybrid Server is based on the
> open-source OpenVZ, I don't see any reason to fear for its future.
>
> https://docs.virtuozzo.com/virtuozzo_product_lifecycle_policy/index.html
>
> but indeed, we are waiting for a VZ 9 which maybe lacks a clearer
> roadmap; for example, is it still as much an Alpha release (not for use in
> production) as mentioned here (Feb 2022)
>
>
> https://www.virtuozzo.com/company/blog/product-updates/virtuozzo-hybrid-server-9-alpha-2/
>
> or http://repo.virtuozzo.com/vz/releases/
>
> *vz9.iso   20-Dec-2022 12:31  2G*
>
> which seems quite recent and is in a far better state now.
>
> Thanks .
>
> jehan .
>
>
> On 27/01/2023 21:16, Gena Makhomed wrote:
>
> OpenVZ 6 is the last fully functional version, running on top of CentOS.
>
> OpenVZ 7, 8, 9 ...
>
> Maybe it is better to just use virtual machines with QEMU-KVM and libvirt?
>
> This solution is very stable, very feature-rich and very useful.
>
> If you need to use very cheap virtual machines, try
> https://firecracker-microvm.github.io/
>
> Or you can combine Firecracker MicroVMs with Docker / OCI images to unify
> containers and VMs: https://github.com/weaveworks/ignite
>
> Stop using OpenVZ, because OpenVZ 6 is End Of Life and the project is now
> dead.
>
> OpenVZ 6 is just the last true and fully functional OpenVZ version.
>
> Something named OpenVZ 7, OpenVZ 8, OpenVZ 9 ... is just the agony of the
> OpenVZ project.
>
> On 09.12.2022 2:16, jjs - mainphrame wrote:
>
> I've been running openvz 7 for some years, and I periodically check on the
> status of openvz 8 and 9.
>
> While openvz 7 has been getting updates, it seems openvz 8 is fairly
> static, and openvz 9 seems not ready for use.
>
> Is there an intent to continue support of openvz beyond version 7?
>
> Since openvz is a great advertisement for virtuozzo, it would be a shame
> if
> it faded away.
>
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Status of OVZ 8 & 9

2023-01-27 Thread jjs - mainphrame
Thanks for the tip, but I find openvz 7 to be a solid platform. My only
gripe is the lack of a web management console, such as was available for
openvz in the past, or what proxmox comes with.

While proxmox has a great web interface and thus less of a learning curve,
the openvz containers are more reliable and I prefer them for that reason.

If a good web interface, like that of proxmox, were to become available for
OVZ 9, I would be quite pleased.

Jake

On Fri, Jan 27, 2023 at 12:19 PM Gena Makhomed  wrote:

> OpenVZ 6 is the last fully functional version, running on top of CentOS.
>
> OpenVZ 7, 8, 9 ...
>
> Maybe it is better to just use virtual machines with QEMU-KVM and libvirt?
>
> This solution is very stable, very feature-rich and very useful.
>
> If you need to use very cheap virtual machines, try
> https://firecracker-microvm.github.io/
>
> Or you can combine Firecracker MicroVMs with Docker / OCI images to
> unify containers and VMs: https://github.com/weaveworks/ignite
>
> Stop using OpenVZ, because OpenVZ 6 is End Of Life and the project is now
> dead.
>
> OpenVZ 6 is just the last true and fully functional OpenVZ version.
>
> Something named OpenVZ 7, OpenVZ 8, OpenVZ 9 ... is just the agony of the
> OpenVZ project.
>
> On 09.12.2022 2:16, jjs - mainphrame wrote:
> > I've been running openvz 7 for some years, and I periodically check on
> the
> > status of openvz 8 and 9.
> >
> > While openvz 7 has been getting updates, it seems openvz 8 is fairly
> > static, and openvz 9 seems not ready for use.
> >
> > Is there an intent to continue support of openvz beyond version 7?
> >
> > Since openvz is a great advertisement for virtuozzo, it would be a shame
> if
> > it faded away.
>
> --
> Best regards,
>   Gena
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Multicast issue with openvz VMs and CTs

2023-01-14 Thread jjs - mainphrame
I find it a bit puzzling that keepalived works while ucarp doesn't, since
they both use vrrp multicast (224.0.0.18)

Maybe some rainy day I'll try to figure out why vrrp mc from keepalived
gets through but vrrp mc from ucarp doesn't, but for now I'm happy to have
something that works.
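
(When that rainy day comes, a side-by-side capture on the host bridge ought
to show the difference - br0 here stands in for whatever the bridge is
actually named:

  tcpdump -ni br0 'host 224.0.0.18 and ip proto 112'

Both CARP and VRRP ride IP protocol 112 to 224.0.0.18, so if keepalived's
advertisements show up there while ucarp's don't, the drop is on the
container egress path.)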

Jake

On Sat, Jan 14, 2023 at 3:16 PM jjs - mainphrame  wrote:

> Success -
>
> I've found that keepalived, which is in the standard debian repos, works
> well to provide a highly available virtual IP using openvz containers, and
> is easy to set up.
>
> I'll now move my default gateway IP from the proxmox CTs to the openvz
> CTs, and breathe a sigh of relief.
>
> Jake
>
>
>
>
>
> On Sun, Jan 8, 2023 at 5:08 PM jjs - mainphrame 
> wrote:
>
>> I've been doing some testing with ucarp, in debian VMs and containers.
>>
>> (ucarp is an implementation of VRRP, a means of providing a highly
>> available floating virtual IP within a cluster of machines)
>>
>> It works fine on proxmox VMs and CTs, and I was hoping to get it working
>> on openvz, but so far my attempts to get it fully up and running have
>> failed.
>>
>> Basically, both nodes become master, because neither node is seeing the
>> multicast traffic from the other.
>>
>> What is the secret of allowing multicast traffic to pass from openvz VMs
>> and CTs, onto the lan?
>>
>> Jake
>>
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Multicast issue with openvz VMs and CTs

2023-01-14 Thread jjs - mainphrame
Success -

I've found that keepalived, which is in the standard debian repos, works
well to provide a highly available virtual IP using openvz containers, and
is easy to set up.
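
(For reference, a minimal per-node sketch of what that config looks like -
the interface name and VIP are examples, not my actual values:

vrrp_instance VI_1 {
    state MASTER              # BACKUP on the peer
    interface eth0            # the container's LAN interface
    virtual_router_id 51
    priority 100              # set lower on the peer, e.g. 90
    advert_int 1
    virtual_ipaddress {
        192.168.1.254/24      # the floating virtual IP
    }
}

Two containers on different hosts running that, with mirrored priorities,
is essentially the whole setup.)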

I'll now move my default gateway IP from the proxmox CTs to the openvz CTs,
and breathe a sigh of relief.

Jake





On Sun, Jan 8, 2023 at 5:08 PM jjs - mainphrame  wrote:

> I've been doing some testing with ucarp, in debian VMs and containers.
>
> (ucarp is an implementation of VRRP, a means of providing a highly
> available floating virtual IP within a cluster of machines)
>
> It works fine on proxmox VMs and CTs, and I was hoping to get it working
> on openvz, but so far my attempts to get it fully up and running have
> failed.
>
> Basically, both nodes become master, because neither node is seeing the
> multicast traffic from the other.
>
> What is the secret of allowing multicast traffic to pass from openvz VMs
> and CTs, onto the lan?
>
> Jake
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Multicast issue with openvz VMs and CTs

2023-01-14 Thread jjs - mainphrame
I'm not feeling as much pressure to fix this at the moment, since I have a
workaround in place (ucarp running on a pair of proxmox/lxc containers on 2
different hosts), but ultimately I want to move this to openvz containers,
as in my experience they have been bulletproof, running literally for
years. In contrast, I've seen lxc containers mysteriously become corrupted
and inoperative, and even since standing up the proxmox boxes, I've seen
lxc containers suddenly become unable to accept ssh connections and need
to be restarted.

Jake



On Wed, Jan 11, 2023 at 12:30 AM Paulo Coghi - Coghi IT <
pauloco...@gmail.com> wrote:

> Thanks Jake for all the valuable information.
>
> I'm following every email.
>
> On Tue, Jan 10, 2023 at 5:05 PM jjs - mainphrame 
> wrote:
>
>> To clarify, the openvz guests can receive multicast traffic from the lan,
>> but they are unable to send multicast traffic to the lan. The multicast
>> packets are dropped on the way out, somewhere between the guest adapter and
>> the host bridge.
>>
>> I'm not seeing any differences in sysctl settings between the ovz hosts
>> and the working hosts, so firewall rules are the likely culprit.
>>
>> I'll continue to chip away at this as time allows and update with any
>> findings.
>>
>> Jake
>>
>>
>> On Sun, Jan 8, 2023 at 5:08 PM jjs - mainphrame 
>> wrote:
>>
>>> I've been doing some testing with ucarp, in debian VMs and containers.
>>>
>>> (ucarp is an implementation of VRRP, a means of providing a highly
>>> available floating virtual IP within a cluster of machines)
>>>
>>> It works fine on proxmox VMs and CTs, and I was hoping to get it working
>>> on openvz, but so far my attempts to get it fully up and running have
>>> failed.
>>>
>>> Basically, both nodes become master, because neither node is seeing the
>>> multicast traffic from the other.
>>>
>>> What is the secret of allowing multicast traffic to pass from openvz VMs
>>> and CTs, onto the lan?
>>>
>>> Jake
>>>
>> ___
>> Users mailing list
>> Users@openvz.org
>> https://lists.openvz.org/mailman/listinfo/users
>>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Multicast issue with openvz VMs and CTs

2023-01-14 Thread jjs - mainphrame
Katran looks interesting, but a little more involved than carp/vrrp.

I'll take a look at this when I get a chance.

Jake

On Wed, Jan 11, 2023 at 12:34 AM Paulo Coghi - Coghi IT <
pauloco...@gmail.com> wrote:

> Jake, does an L4 load balancer solve the problem?
>
> If yes, maybe Katran could help:
> https://github.com/facebookincubator/katran
>
> On Wed, Jan 11, 2023 at 9:28 AM Paulo Coghi - Coghi IT <
> pauloco...@gmail.com> wrote:
>
>> Thanks Jake for all the valuable information.
>>
>> I'm following every email.
>>
>> On Tue, Jan 10, 2023 at 5:05 PM jjs - mainphrame 
>> wrote:
>>
>>> To clarify, the openvz guests can receive multicast traffic from the
>>> lan, but they are unable to send multicast traffic to the lan. The
>>> multicast packets are dropped on the way out, somewhere between the guest
>>> adapter and the host bridge.
>>>
>>> I'm not seeing any differences in sysctl settings between the ovz hosts
>>> and the working hosts, so firewall rules are the likely culprit.
>>>
>>> I'll continue to chip away at this as time allows and update with any
>>> findings.
>>>
>>> Jake
>>>
>>>
>>> On Sun, Jan 8, 2023 at 5:08 PM jjs - mainphrame 
>>> wrote:
>>>
>>>> I've been doing some testing with ucarp, in debian VMs and containers.
>>>>
>>>> (ucarp is an implementation of VRRP, a means of providing a highly
>>>> available floating virtual IP within a cluster of machines)
>>>>
>>>> It works fine on proxmox VMs and CTs, and I was hoping to get it
>>>> working on openvz, but so far my attempts to get it fully up and running
>>>> have failed.
>>>>
>>>> Basically, both nodes become master, because neither node is seeing the
>>>> multicast traffic from the other.
>>>>
>>>> What is the secret of allowing multicast traffic to pass from openvz
>>>> VMs and CTs, onto the lan?
>>>>
>>>> Jake
>>>>
>>> ___
>>> Users mailing list
>>> Users@openvz.org
>>> https://lists.openvz.org/mailman/listinfo/users
>>>
>> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] Multicast issue with openvz VMs and CTs

2023-01-10 Thread jjs - mainphrame
I've been doing some testing with ucarp, in debian VMs and containers.

(ucarp is an implementation of VRRP, a means of providing a highly
available floating virtual IP within a cluster of machines)

It works fine on proxmox VMs and CTs, and I was hoping to get it working on
openvz, but so far my attempts to get it fully up and running have failed.

Basically, both nodes become master, because neither node is seeing the
multicast traffic from the other.

What is the secret of allowing multicast traffic to pass from openvz VMs
and CTs, onto the lan?

Jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Multicast issue with openvz VMs and CTs

2023-01-10 Thread jjs - mainphrame
To clarify, the openvz guests can receive multicast traffic from the lan,
but they are unable to send multicast traffic to the lan. The multicast
packets are dropped on the way out, somewhere between the guest adapter and
the host bridge.

I'm not seeing any differences in sysctl settings between the ovz hosts and
the working hosts, so firewall rules are the likely culprit.

I'll continue to chip away at this as time allows and update with any
findings.
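
In case it helps anyone reproduce this, the checks I've been using are
along these lines (the bridge name is from my setup, so treat it as an
example):

# on the host, watch for VRRP multicast leaving the CT toward the lan
tcpdump -ni br0 host 224.0.0.18

# check whether any FORWARD rule is counting or dropping multicast
iptables -L FORWARD -nv | grep 224

If the advertisements show up on the CT's veth device but never on the
bridge, the drop is in the host's bridge/firewall layer rather than in
the guest.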

Jake


On Sun, Jan 8, 2023 at 5:08 PM jjs - mainphrame  wrote:

> I've been doing some testing with ucarp, in debian VMs and containers.
>
> (ucarp is an implementation of VRRP, a means of providing a highly
> available floating virtual IP within a cluster of machines)
>
> It works fine on proxmox VMs and CTs, and I was hoping to get it working
> on openvz, but so far my attempts to get it fully up and running have
> failed.
>
> Basically, both nodes become master, because neither node is seeing the
> multicast traffic from the other.
>
> What is the secret of allowing multicast traffic to pass from openvz VMs
> and CTs, onto the lan?
>
> Jake
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Solved: network manager is the culprit (was: OVZ 7 network issues...)

2023-01-03 Thread jjs - mainphrame
Update -

It appears to be fairly straightforward to enable forwarding with firewalld
in RHEL 9, so hopefully a plain vanilla openvz 9 will be able to do
everything we need to do.
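
For reference, what I mean is the per-zone forwarding switch that newer
firewalld (0.9+, so RHEL 9) gained; roughly this, with the zone name being
whatever your interfaces actually sit in:

firewall-cmd --permanent --zone=public --add-forward
firewall-cmd --permanent --zone=public --add-masquerade   # if NAT is needed too
firewall-cmd --reload

No more direct rules, at least in theory.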

Looking forward to OVZ 9, will be happy to beta test

Jake



On Sat, Dec 31, 2022 at 2:31 PM jjs - mainphrame  wrote:

> Curious side note -
>
> Replacing firewalld with ufw allows traffic to be forwarded by the
> container. I gather that firewalld was designed for a workstation or a
> laptop, not a multi-homed server. That, and the version of firewalld on
> RHEL 7 is rather old.
>
> In any case, all of the "direct" firewalld rules that should have done the
> trick haven't done so.
>
> I'm curious to see if the newer version of firewalld that comes with RHEL
> 9 will be any more cooperative on this front.
>
> Jake
>
> On Mon, Dec 26, 2022 at 12:36 PM jjs - mainphrame 
> wrote:
>
>> Greetings -
>>
>> It's 2022, I have a new set of openvz servers, and once again, disabling
>> NetworkManager was the key to solving a nagging, intermittent network
>> problem.
>>
>> I recently added interfaces to the openvz servers to connect them to
>> other networks, and while everything seemed to work initially, the bridges
>> kept going down mysteriously.
>>
>> I tried removing and rebuilding the bridges, the networks, and the
>> virtual interfaces, watching a bridge come up and then syslog showing
>> NetworkManager taking it down again for vague and inscrutable reasons.
>> Once I disabled NetworkManager, voila - no more bridge problems.
>>
>> It seems to me that NetworkManager doesn't add any value for a server,
>> only additional failure modes. Perhaps it's a solution in search of a
>> problem, as they say?
>>
>> Jake
>>
>> On Thu, Dec 31, 2015 at 1:15 PM jjs - mainphrame 
>> wrote:
>>
>>> Greetings -
>>>
>>> It's been 2 weeks of flawless network connectivity to all containers
>>> after completely removing network manager, even after attempts to induce
>>> the sort of failures that were seen when network manager was running.
>>>
>>> At this point, it's clear that network manager was the problem, and the
>>> old saw is confirmed: "simple is safe".
>>>
>>> Happy new year to all who were following this question.
>>>
>>> Joe
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Dec 14, 2015 at 9:51 AM, Konstantin Bukharov 
>>> wrote:
>>>
>>>> Hello,
>>>>
>>>>
>>>>
>>>> From the symptoms, it looks like an ARP cache expiration issue.
>>>>
>>>> Please check ARP cache settings on your network equipment.
>>>>
>>>>
>>>>
>>>> We haven’t seen widespread reports of this.
>>>>
>>>>
>>>>
>>>> Best regards,
>>>>
>>>>
>>>>
>>>> *From:* users-boun...@openvz.org [mailto:users-boun...@openvz.org] *On
>>>> Behalf Of *jjs - mainphrame
>>>> *Sent:* Saturday, December 12, 2015 6:18
>>>> *To:* users@openvz.org
>>>> *Subject:* [Users] OVZ 7 network issues (vz host stops answering arp
>>>> requests)
>>>>
>>>>
>>>>
>>>> Greetings,
>>>>
>>>>
>>>>
>>>> I've been running some servers in OVZ 7 containers for some months now,
>>>> and I'm happy with reliability and performance, with the exception of an
>>>> occasional loss of ct network connectivity.
>>>>
>>>> From time to time, I'll get a xymon alert that all containers are
>>>> unreachable. The cts are all using host routing. What I see, when I examine
>>>> any affected ct, is this:
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> The container no longer responds to ping, even from the local lan.
>>>>
>>>> The vz host and container can connect to each other
>>>>
>>>> The container can not reach anything beyond the host.
>>>>
>>>>
>>>>
>>>> When I enter the container and ping another box, the pings are
>>>> received, but the box cannot return the pings as the arp request goes
>>>> unanswered. For some reason the vz host forgets about the cts, and it's
>>>> always all of them. This is on 2 CentOS 7 boxes, running OVZ 7 beta and
>>>> OVZ 7 factory.
>>>>
>>>>
>>>>
>>>> Doing a vzctl restart on each affected ct restores connectivity. Has
>>>> anyone else seen this issue?
>>>>
>>>>
>>>>
>>>> Regards,
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> ___
>>>> Users mailing list
>>>> Users@openvz.org
>>>> https://lists.openvz.org/mailman/listinfo/users
>>>>
>>>>
>>>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Solved: network manager is the culprit (was: OVZ 7 network issues...)

2022-12-31 Thread jjs - mainphrame
Curious side note -

Replacing firewalld with ufw allows traffic to be forwarded by the
container. I gather that firewalld was designed for a workstation or a
laptop, not a multi-homed server. That, and the version of firewalld on
RHEL 7 is rather old.

In any case, all of the "direct" firewalld rules that should have done the
trick haven't done so.
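
To be concrete, the direct rules I mean were of this general shape (the
interface names here are placeholders, not my actual ones):

firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i eth1 -o eth0 -j ACCEPT
firewall-cmd --permanent --direct --add-rule ipv4 nat POSTROUTING 0 -o eth0 -j MASQUERADE
firewall-cmd --reload

At least on the old firewalld here, rules of that shape didn't get the
traffic flowing.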

I'm curious to see if the newer version of firewalld that comes with RHEL 9
will be any more cooperative on this front.

Jake

On Mon, Dec 26, 2022 at 12:36 PM jjs - mainphrame 
wrote:

> Greetings -
>
> It's 2022, I have a new set of openvz servers, and once again, disabling
> NetworkManager was the key to solving a nagging, intermittent network
> problem.
>
> I recently added interfaces to the openvz servers to connect them to other
> networks, and while everything seemed to work initially, the bridges kept
> going down mysteriously.
>
> I tried removing and rebuilding the bridges, the networks, and the
> virtual interfaces, watching a bridge come up and then syslog showing
> NetworkManager taking it down again for vague and inscrutable reasons.
> Once I disabled NetworkManager, voila - no more bridge problems.
>
> It seems to me that NetworkManager doesn't add any value for a server,
> only additional failure modes. Perhaps it's a solution in search of a
> problem, as they say?
>
> Jake
>
> On Thu, Dec 31, 2015 at 1:15 PM jjs - mainphrame 
> wrote:
>
>> Greetings -
>>
>> It's been 2 weeks of flawless network connectivity to all containers
>> after completely removing network manager, even after attempts to induce
>> the sort of failures that were seen when network manager was running.
>>
>> At this point, it's clear that network manager was the problem, and the
>> old saw is confirmed: "simple is safe".
>>
>> Happy new year to all who were following this question.
>>
>> Joe
>>
>>
>>
>>
>>
>> On Mon, Dec 14, 2015 at 9:51 AM, Konstantin Bukharov 
>> wrote:
>>
>>> Hello,
>>>
>>>
>>>
>>> From the symptoms, it looks like an ARP cache expiration issue.
>>>
>>> Please check ARP cache settings on your network equipment.
>>>
>>>
>>>
>>> We haven’t seen widespread reports of this.
>>>
>>>
>>>
>>> Best regards,
>>>
>>>
>>>
>>> *From:* users-boun...@openvz.org [mailto:users-boun...@openvz.org] *On
>>> Behalf Of *jjs - mainphrame
>>> *Sent:* Saturday, December 12, 2015 6:18
>>> *To:* users@openvz.org
>>> *Subject:* [Users] OVZ 7 network issues (vz host stops answering arp
>>> requests)
>>>
>>>
>>>
>>> Greetings,
>>>
>>>
>>>
>>> I've been running some servers in OVZ 7 containers for some months now,
>>> and I'm happy with reliability and performance, with the exception of an
>>> occasional loss of ct network connectivity.
>>>
>>> From time to time, I'll get a xymon alert that all containers are
>>> unreachable. The cts are all using host routing. What I see, when I examine
>>> any affected ct, is this:
>>>
>>>
>>>
>>>
>>>
>>> The container no longer responds to ping, even from the local lan.
>>>
>>> The vz host and container can connect to each other
>>>
>>> The container can not reach anything beyond the host.
>>>
>>>
>>>
>>> When I enter the container and ping another box, the pings are received,
>>> but the box cannot return the pings as the arp request goes unanswered.
>>> For some reason the vz host forgets about the cts, and it's always all of
>>> them. This is on 2 CentOS 7 boxes, running OVZ 7 beta and OVZ 7 factory.
>>>
>>>
>>>
>>> Doing a vzctl restart on each affected ct restores connectivity. Has
>>> anyone else seen this issue?
>>>
>>>
>>>
>>> Regards,
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@openvz.org
>>> https://lists.openvz.org/mailman/listinfo/users
>>>
>>>
>>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Has anyone used an openvz container as a gateway?

2022-12-30 Thread jjs - mainphrame
Answering my own question for future inquirers -

After successfully testing an lxc container as a NAT gateway, I resumed
testing on openvz. I remembered there was some sort of setting to enable
iptables in a container, and eventually found it:

# prlctl set MyCT --netfilter full

Of course, fighting with firewalld is a whole different set of problems,
but with firewalld off, it works perfectly. I'll either find the magic
tweak that makes firewalld allow the forwarding, or I'll live
without firewalld for now.
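
For future inquirers, once --netfilter full is set, the rest is the usual
Linux router recipe inside the CT (interface names are examples; eth0 =
external, eth1 = internal here):

sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT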

Jake


On Tue, Dec 20, 2022 at 2:21 PM jjs - mainphrame  wrote:

> I've been on a hardware consolidation and virtualization kick, and have
> been converting physical hosts in the office to openvz VMs.
>
> I have a couple of physical boxes each connecting to an internet provider,
> and acting as a firewall/gateway, among other things. I was able to convert
> these to VMs, after adding the interfaces and creating the bridges and
> networks, and it works as expected.
>
> I thought it would be more efficient to use a container, and have been
> testing with a container connected to an internal bridge, and an external
> bridge. I haven't yet been able to figure out why it won't forward traffic
> from the internal interface to the external interface, even though it's
> connected to the same networks as the VM which is successfully doing so.
>
> Is it possible to use a container for this, or am I trying to make a
> container do something it was designed not to do?
>
> Jake
>
>
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Solved: network manager is the culprit (was: OVZ 7 network issues...)

2022-12-26 Thread jjs - mainphrame
Greetings -

It's 2022, I have a new set of openvz servers, and once again, disabling
NetworkManager was the key to solving a nagging, intermittent network
problem.

I recently added interfaces to the openvz servers to connect them to other
networks, and while everything seemed to work initially, the bridges kept
going down mysteriously.

I tried removing and rebuilding the bridges, the networks, and the virtual
interfaces, watching a bridge come up and then syslog showing
NetworkManager taking it down again for vague and inscrutable reasons.
Once I disabled NetworkManager, voila - no more bridge problems.
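
For anyone else chasing phantom bridge flaps, the disabling itself is just
the standard systemd dance (this is what I did; the legacy network service
is what the host falls back to on EL7):

systemctl disable --now NetworkManager
systemctl enable --now network

A gentler alternative, which I haven't tried here, is marking individual
interfaces unmanaged with NM_CONTROLLED=no in their ifcfg files.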

It seems to me that NetworkManager doesn't add any value for a server, only
additional failure modes. Perhaps it's a solution in search of a problem,
as they say?

Jake

On Thu, Dec 31, 2015 at 1:15 PM jjs - mainphrame  wrote:

> Greetings -
>
> It's been 2 weeks of flawless network connectivity to all containers after
> completely removing network manager, even after attempts to induce the sort
> of failures that were seen when network manager was running.
>
> At this point, it's clear that network manager was the problem, and the
> old saw is confirmed: "simple is safe".
>
> Happy new year to all who were following this question.
>
> Joe
>
>
>
>
>
> On Mon, Dec 14, 2015 at 9:51 AM, Konstantin Bukharov 
> wrote:
>
>> Hello,
>>
>>
>>
>> From the symptoms, it looks like an ARP cache expiration issue.
>>
>> Please check ARP cache settings on your network equipment.
>>
>>
>>
>> We haven’t seen widespread reports of this.
>>
>>
>>
>> Best regards,
>>
>>
>>
>> *From:* users-boun...@openvz.org [mailto:users-boun...@openvz.org] *On
>> Behalf Of *jjs - mainphrame
>> *Sent:* Saturday, December 12, 2015 6:18
>> *To:* users@openvz.org
>> *Subject:* [Users] OVZ 7 network issues (vz host stops answering arp
>> requests)
>>
>>
>>
>> Greetings,
>>
>>
>>
>> I've been running some servers in OVZ 7 containers for some months now,
>> and I'm happy with reliability and performance, with the exception of an
>> occasional loss of ct network connectivity.
>>
>> From time to time, I'll get a xymon alert that all containers are
>> unreachable. The cts are all using host routing. What I see, when I examine
>> any affected ct, is this:
>>
>>
>>
>>
>>
>> The container no longer responds to ping, even from the local lan.
>>
>> The vz host and container can connect to each other
>>
>> The container can not reach anything beyond the host.
>>
>>
>>
>> When I enter the container and ping another box, the pings are received,
>> but the box cannot return the pings as the arp request goes unanswered.
>> For some reason the vz host forgets about the cts, and it's always all of
>> them. This is on 2 CentOS 7 boxes, running OVZ 7 beta and OVZ 7 factory.
>>
>>
>>
>> Doing a vzctl restart on each affected ct restores connectivity. Has
>> anyone else seen this issue?
>>
>>
>>
>> Regards,
>>
>>
>>
>>
>>
>>
>>
>> ___
>> Users mailing list
>> Users@openvz.org
>> https://lists.openvz.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] Migration of VMs between openvz hosts

2022-12-20 Thread jjs - mainphrame
I have 3 openvz hosts, all with slightly differing Intel x86_64 chipsets.

I've noticed that VMs created on the newest node refuse to migrate to the
others:

[root@hachi ~]# prlctl migrate pavel lindell
WARNING: You are using a deprecated CLI component that won't be installed
by default in the next major release. Please use virsh instead
Migrate the VM pavel on lindell  ()
Operation progress100%
Operation progress ... 0%
Failed to migrate the VM: Operation failed. Failed to execute the
operation. (Details: operation failed: guest CPU doesn't match
specification: missing features:
fma,movbe,bmi1,avx2,bmi2,invpcid,mpx,rdseed,adx,smap,clflushopt,pku,arch-capabilities,xsavec,xgetbv1,pdpe1gb,abm,3dnowprefetch)
[root@hachi ~]#

But VMs created on the other nodes will migrate perfectly well to the newer
node.

Seeing that there is absolutely no CPU feature on the newest node which is
critical to any of the VMs, are there some xml files that I could modify to
set a certain "least common denominator" CPU feature set, and freely allow
migration between them all?
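
My guess is that the knob is the <cpu> element in each VM's domain XML
(via virsh edit), pinned to a baseline model that all three chipsets can
honor; something like the following, where the model name is just a
placeholder for whatever my oldest node really supports:

<cpu mode='custom' match='exact'>
  <model fallback='forbid'>Westmere</model>
</cpu>

But if there's a supported prlctl-level way to set a cluster-wide
baseline, I'd rather use that.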

Jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] Has anyone used an openvz container as a gateway?

2022-12-20 Thread jjs - mainphrame
I've been on a hardware consolidation and virtualization kick, and have
been converting physical hosts in the office to openvz VMs.

I have a couple of physical boxes each connecting to an internet provider,
and acting as a firewall/gateway, among other things. I was able to convert
these to VMs, after adding the interfaces and creating the bridges and
networks, and it works as expected.

I thought it would be more efficient to use a container, and have been
testing with a container connected to an internal bridge, and an external
bridge. I haven't yet been able to figure out why it won't forward traffic
from the internal interface to the external interface, even though it's
connected to the same networks as the VM which is successfully doing so.

Is it possible to use a container for this, or am I trying to make a
container do something it was designed not to do?

Jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Status of OVZ 8 & 9

2022-12-11 Thread jjs - mainphrame
Thank you Maik, that's encouraging.

I can understand the decision on OVZ 8, and I look forward to moving to OVZ
9

Jake

On Sun, Dec 11, 2022 at 3:21 PM Maik Broemme 
wrote:

> Hi Jake,
>
>
>
> Actually, OpenVZ 7 will not be the last stable and free solution. Needless
> to say, 2022 was a tough year for all of us, and in terms of releases not
> much has happened since the OpenVZ 9 Alpha release. But it was not the last
> one, believe me.  There will be a Beta release in 2023 and also an RTM
> version of OpenVZ 9.
>
>
>
> Moreover, we will skip OpenVZ 8 completely, as RHEL 9 already went RTM in
> May 2022 and we want to move directly to the latest upstream.
>
>
>
> *Maik Broemme *| Senior Product Manager | Virtuozzo |
> maik.broe...@virtuozzo.com | Skype: maikbroemme
>
>
>
> *From:* users-boun...@openvz.org  *On Behalf
> Of *jjs - mainphrame
> *Sent:* Sunday, 11 December 2022 19:35
> *To:* OpenVZ users 
> *Subject:* Re: [Users] Status of OVZ 8 & 9
>
>
>
> Hello all,
>
>
>
> Is it safe to say that openvz 7 is essentially the end of the line in
> terms of an effective, free openvz solution? I've been looking at openvz 8
> & 9, and much as I want them to work, they don't seem to be viable.
>
> Thanks for any insight you can share.
>
> Jake
>
>
>
> On Thu, Dec 8, 2022 at 4:16 PM jjs - mainphrame 
> wrote:
>
> I've been running openvz 7 for some years, and I periodically check on the
> status of openvz 8 and 9.
>
> While openvz 7 has been getting updates, it seems openvz 8 is fairly
> static, and openvz 9 seems not ready for use.
>
> Is there an intent to continue support of openvz beyond version 7?
>
> Since openvz is a great advertisement for virtuozzo, it would be a shame
> if it faded away.
>
> Jake
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Status of OVZ 8 & 9

2022-12-11 Thread jjs - mainphrame
Yes, this is my concern.

As we move forward, we are always evaluating what the safest bets are, and
while I've favored openvz/virtuozzo since 2011, alternatives such as
proxmox are looking increasingly viable, given that openvz development
appears to be stagnant.

At present, openvz has features that are superior to those in proxmox, if
they exist at all. Hopefully that lead won't be squandered.

As I've mentioned before, my personal use of openvz led directly to the
adoption of virtuozzo for our Linux infrastructure at a previous large
employer, and I'd hate to see the free version wither away into
irrelevance.

Jake

On Sun, Dec 11, 2022 at 11:22 AM Jehan PROCACCIA <
jehan.procac...@imtbs-tsp.eu> wrote:

> Hello
>
> indeed, we also use openvz7 (free virtuozzo) and are wondering what is
> best to install new server hypervisors with, openvz8 or 9?
> It is not because we use free openvz that we don't support the virtuozzo
> business; although we are a public school, we do purchase a few Virtuozzo
> hybrid server licences in order to contribute to the maintenance and
> development of these great products.
> Please let us know if we made the right choice.
>
> Regards .
>
> *Jehan PROCACCIA*
> *Systems and networks engineer*
> *Technical director of the REVE network: *
> *Réseau d’Évry Val d'Essonne*
> *THD team (TSP/RST) *
> *01 60 76 44 36*
>
>
>
> --
> *From: *"jjs - mainphrame" 
> *To: *"OpenVZ users" 
> *Sent: *Sunday, 11 December 2022 19:34:58
> *Subject: *Re: [Users] Status of OVZ 8 & 9
>
> Hello all,
> Is it safe to say that openvz 7 is essentially the end of the line in
> terms of an effective, free openvz solution? I've been looking at openvz 8
> & 9, and much as I want them to work, they don't seem to be viable.
>
> Thanks for any insight you can share.
>
> Jake
>
> On Thu, Dec 8, 2022 at 4:16 PM jjs - mainphrame 
> wrote:
>
>> I've been running openvz 7 for some years, and I periodically check on
>> the status of openvz 8 and 9.
>>
>> While openvz 7 has been getting updates, it seems openvz 8 is fairly
>> static, and openvz 9 seems not ready for use.
>>
>> Is there an intent to continue support of openvz beyond version 7?
>>
>> Since openvz is a great advertisement for virtuozzo, it would be a shame
>> if it faded away.
>>
>> Jake
>>
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Status of OVZ 8 & 9

2022-12-11 Thread jjs - mainphrame
Hello all,

Is it safe to say that openvz 7 is essentially the end of the line in terms
of an effective, free openvz solution? I've been looking at openvz 8 & 9,
and much as I want them to work, they don't seem to be viable.

Thanks for any insight you can share.

Jake

On Thu, Dec 8, 2022 at 4:16 PM jjs - mainphrame  wrote:

> I've been running openvz 7 for some years, and I periodically check on the
> status of openvz 8 and 9.
>
> While openvz 7 has been getting updates, it seems openvz 8 is fairly
> static, and openvz 9 seems not ready for use.
>
> Is there an intent to continue support of openvz beyond version 7?
>
> Since openvz is a great advertisement for virtuozzo, it would be a shame
> if it faded away.
>
> Jake
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] Status of OVZ 8 & 9

2022-12-08 Thread jjs - mainphrame
I've been running openvz 7 for some years, and I periodically check on the
status of openvz 8 and 9.

While openvz 7 has been getting updates, it seems openvz 8 is fairly
static, and openvz 9 seems not ready for use.

Is there an intent to continue support of openvz beyond version 7?

Since openvz is a great advertisement for virtuozzo, it would be a shame if
it faded away.

Jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] OpenVZ 9 Status

2022-11-30 Thread jjs - mainphrame
Actually, I did an update, which downloaded 344 packages, but would not
install any of them, complaining about a missing public key:


Last few lines of yum upgrade output:

Public key for yum-utils-4.0.24-4.vl9.noarch.rpm is not installed. Failing
package is: yum-utils-4.0.24-4.vl9.noarch
 GPG Keys are configured as: file:///etc/pki/rpm-gpg/VZLINUX_GPG_KEY,
file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-9
Public key for zip-3.0-33.vl9.x86_64.rpm is not installed. Failing package
is: zip-3.0-33.vl9.x86_64
 GPG Keys are configured as: file:///etc/pki/rpm-gpg/VZLINUX_GPG_KEY,
file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-9
Public key for zlib-1.2.11-31.vl9.1.x86_64.rpm is not installed. Failing
package is: zlib-1.2.11-31.vl9.1.x86_64
 GPG Keys are configured as: file:///etc/pki/rpm-gpg/VZLINUX_GPG_KEY,
file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-9
The downloaded packages were saved in cache until the next successful
transaction.
You can remove cached packages by executing 'yum clean packages'.
Error: GPG check FAILED
[root@ovz9 ~]#
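
If anyone else trips over this, the keys the error points at can be
imported by hand (assuming they actually exist at those paths on the
installed system):

rpm --import /etc/pki/rpm-gpg/VZLINUX_GPG_KEY
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-9

I haven't yet confirmed whether that is enough, or whether the alpha
packages are simply unsigned.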


On Wed, Nov 30, 2022 at 3:38 PM jjs - mainphrame  wrote:

> I'm curious as well.
>
> I've downloaded an openvz9 image and have been testing with it, but the
> kernel is still the one from February.
>
> Hope to see some updates soon.
>
> Jake
>
>
>
> On Wed, Nov 30, 2022 at 3:04 PM Jonathan Wright 
> wrote:
>
>> Hi,
>>
>> As a followup to Maik's announcement here back in February
>> (https://lists.openvz.org/pipermail/users/2022-February/008175.html) I'm
>> wondering about the status of OpenVZ 9.  There appear to have been no
>> new builds or packages published to the repositories since those initial
>> uploads.  I would've at least expected a rebuild against RHEL's new
>> sources after the 9.0 GA.
>>
>> I'm hoping that OpenVZ 9 doesn't have the same fate as 8 - getting an
>> Alpha release and never making it past that :)
>>
>> --
>> Jonathan Wright
>> KnownHost, LLC
>> https://www.knownhost.com
>>
>> ___
>> Users mailing list
>> Users@openvz.org
>> https://lists.openvz.org/mailman/listinfo/users
>>
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] OpenVZ 9 Status

2022-11-30 Thread jjs - mainphrame
I'm curious as well.

I've downloaded an openvz9 image and have been testing with it, but the
kernel is still the one from February.

Hope to see some updates soon.

Jake



On Wed, Nov 30, 2022 at 3:04 PM Jonathan Wright 
wrote:

> Hi,
>
> As a followup to Maik's announcement here back in February
> (https://lists.openvz.org/pipermail/users/2022-February/008175.html) I'm
> wondering about the status of OpenVZ 9.  There appear to have been no
> new builds or packages published to the repositories since those initial
> uploads.  I would've at least expected a rebuild against RHEL's new
> sources after the 9.0 GA.
>
> I'm hoping that OpenVZ 9 doesn't have the same fate as 8 - getting an
> Alpha release and never making it past that :)
>
> --
> Jonathan Wright
> KnownHost, LLC
> https://www.knownhost.com
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] vztop for virtuozzo7 and check ressources

2022-04-14 Thread jjs - mainphrame
There's always vzstat - it still works

Jake

On Thu, Apr 14, 2022 at 1:57 PM jehan Procaccia <
jehan.procac...@imtbs-tsp.eu> wrote:

> Thanks for your answer, but I don't see "CTID" with htop
>
> there's only :
>
> *PID  USER  PR  NI  VIRT  RES  SHR  S  %CPU  %MEM  TIME+  COMMAND*
>
>
>
>
> *# cat /etc/redhat-release
> Virtuozzo Linux release 7.9
> # rpm -q htop
> htop-2.2.0-3.el7.x86_64*
>
> then, there's no related/added value of htop regarding the sorting of
> CTID processes; did I miss something?
>
> ok for user_beancounters, I'll keep checking them to see if some are over
> limits; I guess that if it changes in vz 9 you'll let us know.
>
> Regarding these features (counters, vztop), do you confirm that they
> should be available regardless of using a server with a licensed
> virtuozzo hybrid server or a free vzlinux server?
>
> It is still not clear to me what the different features are between the
> two; is there an online page that compares them?
>
> To my knowledge, licensed Virtuozzo adds: Support, ReadyKernel, Storage,
> Backups. Are there other features/services? Perhaps resource
> monitoring / vztop !?
>
> thanks .
>
> jehan .
>
>
>
> https://www.virtuozzo.com/company/blog/product-updates/virtuozzos-mature-linux-distribution-vzlinux-now-available-to-public/
> On 08/04/2022 at 07:45, Vasily Averin wrote:
>
> Dear Jehan,
>
> Sorry for the long response.
>
> On 4/1/22 00:27, jehan.procac...@tem-tsp.eu wrote:
>
> Hello
>
> in older openvz, there was vztop on the hypervisor to check CT/VM usage
>
> I cannot find which package provides vztop in virtuozzo 7; is it still
> available?
>
> On my test node vztop is an alias
>
> [root@tom ~]# which vztop
> alias vztop='htop -s CTID'
>   /usr/bin/htop
>
> [root@tom ~]# rpm -qf /usr/bin/htop
> htop-2.2.0-1.vl7.1.x86_64
> [root@tom ~]# rpm -ql htop-2.2.0-1.vl7.1.x86_64
> /etc/profile.d/vztop.sh   <<< VvS: interesting
> /usr/bin/htop
> /usr/share/applications/htop.desktop
> /usr/share/doc/htop-2.2.0
> /usr/share/doc/htop-2.2.0/AUTHORS
> /usr/share/doc/htop-2.2.0/ChangeLog
> /usr/share/doc/htop-2.2.0/README
> /usr/share/licenses/htop-2.2.0
> /usr/share/licenses/htop-2.2.0/COPYING
> /usr/share/man/man1/htop.1.gz
> /usr/share/pixmaps/htop.png
>
> [root@tom ~]# cat /etc/profile.d/vztop.sh
> # only if no alias is already set
> alias vztop >/dev/null 2>&1 || alias vztop='htop -s CTID'
>
>
> is # cat /proc/user_beancounters still the correct and recommended way to
> check the different resource counters inside a CT?
>
> Yes, it still works correctly on vz7.
> however I'm not sure about the upcoming vz9.
>
> Also I would advise you to look at the Virtuozzo Hybrid Server 7.5
> documentation:
> https://docs.virtuozzo.com/master/index.html
> https://docs.virtuozzo.com/virtuozzo_hybrid_server_7_upgrade_guide/index.html
> https://docs.virtuozzo.com/virtuozzo_hybrid_server_7_users_guide/managing-virtual-machines-and-containers/index.html
>
> I hope it helps you.
>
> Thank you for the questions,
>   Vasily Averin
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] VZLinux 8, how to install vzctl, vzlist commands

2021-11-17 Thread jjs - mainphrame
Ah, good catch. We must have all just assumed he meant openvz 8.

Jake

On Tue, Nov 16, 2021 at 11:54 PM Denis Silakov 
wrote:

> Hi,
>
> the initial question was about VzLinux 8 and VzLinux != OpenVZ 
>
> If you want containers, you should first upgrade VzLinux to OpenVZ by
> running do-upgrade-vzlin-openvz8
>
> But note that OpenVZ 8 is in the alpha stage and not recommended for
> production use.
> --
> *From:* Website Solution - George 
> *Sent:* Wednesday, November 17, 2021 5:48 AM
> *To:* OpenVZ users ; Serg Parf ; Denis
> Silakov 
> *Subject:* Re: [Users] VZLinux 8, how to install vzctl, vzlist commands
>
>
>
> You may yum install vzctl
>
> However, I am not sure whether simfs is still supported, even as a
> limited legacy option, or not.
>
> I hope it is still supported, at least minimally.
>
> [root@openvz8-dev vz]# yum provides */vzctl
> Last metadata expiration check: 0:08:15 ago on Wed 17 Nov 2021 10:36:51 AM
> HKT.
> vzctl-8.0.2-1.vz8.x86_64 : Containers control utility
> Repo: @System
> Matched from:
> Filename: /etc/logrotate.d/vzctl
> Filename: /usr/libexec/vzctl
> Filename: /usr/sbin/vzctl
>
> vzctl-8.0.2-1.vz8.x86_64 : Containers control utility
> Repo: openvz-os
> Matched from:
> Filename: /etc/logrotate.d/vzctl
> Filename: /usr/libexec/vzctl
> Filename: /usr/sbin/vzctl
>
> Regards
> George
>
>
>
> On 17-Nov-21 12:43 AM, Serg Parf wrote:
>
> Hi,
>
> I just installed VzLinux 8.
>
> I want to run my containers (`VE_LAYOUT="simfs"`) on VzLinux 8.
>
> I do not see any prlctl or vzctl/vzlist commands, or rpms that provide
> them.
>
> I cannot find any instructions on
> https://docs.virtuozzo.com/master/index.html either.
>
>
>
>
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] VZ8 testing

2021-09-22 Thread jjs - mainphrame
Migrating a stopped CT from VZ7 with vzmigrate worked well.

Migrating a stopped VM from VZ7 with prlctl migrate worked well

Migrating a running VM from VZ7 with prlctl migrate failed:

[root@hachi ~]# prlctl migrate centos8 vz4
Virtuozzo Linux release 8.0

Authorized use only. All activity may be monitored and reported.

Migrate the VM centos8 on vz4  ()
Operation progress100%
Operation progress ... 0%
Failed to migrate the VM: Operation failed. Failed to execute the
operation. (Details: internal error: process exited while connecting to
monitor: 2021-09-22T22:42:55.913562Z qemu-kvm: -blockdev
{"driver":"file","filename":"/vz/vmprivate/1c2dcc8b-601f-447e-8c61-96443cab0a79/harddisk.hdd","aio":"native","node-name":"libvirt-3-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}:
Could not open
'/vz/vmprivate/1c2dcc8b-601f-447e-8c61-96443cab0a79/harddisk.hdd':
Permission denied)
[root@hachi ~]#

Jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] OpenVZ 8.0 - Alpha Release

2021-09-22 Thread jjs - mainphrame
I migrated a CT from a VZ7 node to the VZ8 node, and it's running normally
but the vzlist output is buggy: (0 procs)

[root@vz4 ~]# vzlist
      CTID      NPROC STATUS    CFG_IP_ADDR      HOSTNAME
      1548          0 running   192.168.111.77   hellboy
[root@vz4 ~]#

Should I file a bug, or is this known already?

Jake

On Tue, Sep 21, 2021 at 11:11 AM jjs - mainphrame 
wrote:

> Kudos, I've been waiting for this for a long time.
>
> I've migrated the CTs and VMs off of a vz7 host and offered it up for vz8
> testing. My initial reaction after vz8 install is relief that the useful
> and powerful vz* commands are still present. Now the real testing begins...
>
> Jake
>
> On Mon, Sep 20, 2021 at 8:07 AM Maik Broemme 
> wrote:
>
>> Hi,
>>
>> I'm pleased to announce the release of OpenVZ 8.0 Alpha Release. The new
>> release focuses on rebasing OpenVZ to the latest Red Hat Enterprise
>> Linux 8.4 with 4.18.0-x kernel.
>>
>> This version is not ready for production use. While we don't intend to
>> release broken builds, these releases are very early in development, and
>> not supported for production use, as they may contain errors, and any
>> resulting instability could cause crashes or data loss, so you should
>> not deploy this to a production environment. Instead, please try out the
>> builds in a lab or test environment.
>>
>> New Features
>> 
>>
>> * Simplified installation program.
>>
>> * The dispatcher component is now optional and not installed by default
>>   as are the 'prl*' tools like 'prlctl'. The primary API for managing
>>   virtual environments is now libvirt. The tools for managing virtual
>>   environments are now 'virsh' and 'virt-install'. The Parallels SDK as
>>   well as the 'prl*' tools can be installed for backward compatibility.
>>
>> * Supported guest operating systems for creating new containers and
>>   virtual machines are: VzLinux 7 and 8, Red Hat Enterprise Linux 7 and
>>   8, CentOS 7, AlmaLinux 8, Debian 10, Ubuntu 18.04 LTS and 20.04 LTS,
>>   SUSE Linux Enterprise Server 12 and 15. Additionally, these guest
>>   operating systems are supported in virtual machines only: Windows
>>   Server 2016, Windows Server 2019, Windows Server 2022. Migrated
>>   containers and virtual machines with older guest operating systems
>>   should continue to work.
>>
>> * OpenVZ 8.0 Alpha is based on Red Hat Enterprise Linux 8.4 as well as
>>   the 4.18.0-x kernels.
>>
>> * Libvirt storage pools and volumes can be used.
>>
>> * Problem reports can be collected and sent using the 'vzreport' tool.
>>
>> * The 'ip_conntrack_disable_ve0' option of the nf_conntrack kernel
>>   module has been dropped. Connection tracking is now enabled
>>   automatically if required by iptables rules. Otherwise, it remains
>>   disabled.
>>
>> Known Issues
>> 
>>
>> * Resizing container disks may fail
>>
>> * Microsoft Windows virtual machines created on this alpha build may not
>>   work
>>
>> * Virtual machine snapshots may not work
>>
>> * Network interface hotplugging may not work
>>
>> Download
>> 
>>
>> All binary components as well as installation ISO images are freely
>> available at the OpenVZ download server:
>>
>> https://download.openvz.org/virtuozzo/releases/8.0/
>>
>> https://download.openvz.org/virtuozzo/releases/8.0/x86_64/iso/openvz-iso-8.0.0-1336.iso
>>
>> and mirrors:
>>
>> https://mirrors.openvz.org/
>>
>> The source code of each component is available in the public repository:
>>
>> https://src.openvz.org/projects/OVZ
>>
>> Feedback
>> 
>>
>> Each release includes one or more changes and therefore we are looking to
>> our
>> user base in testing the new features and improvements of upcoming major
>> release and providing valued feedback.
>>
>> This allows critical issues or fixes to be addressed, please provide Alpha
>> feedback to OpenVZ users mailing list users@openvz.org or submitting a
>> bug in case of a serious issue to https://bugs.openvz.org/
>>
>> Sincerely,
>> Maik
>> ___
>> Users mailing list
>> Users@openvz.org
>> https://lists.openvz.org/mailman/listinfo/users
>>
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] OpenVZ 8.0 - Alpha Release

2021-09-21 Thread jjs - mainphrame
Kudos, I've been waiting for this for a long time.

I've migrated the CTs and VMs off of a vz7 host and offered it up for vz8
testing. My initial reaction after vz8 install is relief that the useful
and powerful vz* commands are still present. Now the real testing begins...

Jake

On Mon, Sep 20, 2021 at 8:07 AM Maik Broemme 
wrote:

> Hi,
>
> I'm pleased to announce the release of OpenVZ 8.0 Alpha Release. The new
> release focuses on rebasing OpenVZ to the latest Red Hat Enterprise
> Linux 8.4 with 4.18.0-x kernel.
>
> This version is not ready for production use. While we don't intend to
> release broken builds, these releases are very early in development, and
> not supported for production use, as they may contain errors, and any
> resulting instability could cause crashes or data loss, so you should
> not deploy this to a production environment. Instead, please try out the
> builds in a lab or test environment.
>
> New Features
> 
>
> * Simplified installation program.
>
> * The dispatcher component is now optional and not installed by default
>   as are the 'prl*' tools like 'prlctl'. The primary API for managing
>   virtual environments is now libvirt. The tools for managing virtual
>   environments are now 'virsh' and 'virt-install'. The Parallels SDK as
>   well as the 'prl*' tools can be installed for backward compatibility.
>
> * Supported guest operating systems for creating new containers and
>   virtual machines are: VzLinux 7 and 8, Red Hat Enterprise Linux 7 and
>   8, CentOS 7, AlmaLinux 8, Debian 10, Ubuntu 18.04 LTS and 20.04 LTS,
>   SUSE Linux Enterprise Server 12 and 15. Additionally, these guest
>   operating systems are supported in virtual machines only: Windows
>   Server 2016, Windows Server 2019, Windows Server 2022. Migrated
>   containers and virtual machines with older guest operating systems
>   should continue to work.
>
> * OpenVZ 8.0 Alpha is based on Red Hat Enterprise Linux 8.4 as well as
>   the 4.18.0-x kernels.
>
> * Libvirt storage pools and volumes can be used.
>
> * Problem reports can be collected and sent using the 'vzreport' tool.
>
> * The 'ip_conntrack_disable_ve0' option of the nf_conntrack kernel
>   module has been dropped. Connection tracking is now enabled
>   automatically if required by iptables rules. Otherwise, it remains
>   disabled.
>
> Known Issues
> 
>
> * Resizing container disks may fail
>
> * Microsoft Windows virtual machines created on this alpha build may not
>   work
>
> * Virtual machine snapshots may not work
>
> * Network interface hotplugging may not work
>
> Download
> 
>
> All binary components as well as installation ISO images are freely
> available at the OpenVZ download server:
>
> https://download.openvz.org/virtuozzo/releases/8.0/
>
> https://download.openvz.org/virtuozzo/releases/8.0/x86_64/iso/openvz-iso-8.0.0-1336.iso
>
> and mirrors:
>
> https://mirrors.openvz.org/
>
> The source code of each component is available in the public repository:
>
> https://src.openvz.org/projects/OVZ
>
> Feedback
> 
>
> Each release includes one or more changes and therefore we are looking to
> our
> user base in testing the new features and improvements of upcoming major
> release and providing valued feedback.
>
> This allows critical issues or fixes to be addressed, please provide Alpha
> feedback to OpenVZ users mailing list users@openvz.org or submitting a
> bug in case of a serious issue to https://bugs.openvz.org/
>
> Sincerely,
> Maik
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] yum update aborts due to ntfs issue

2021-09-14 Thread jjs - mainphrame
Thanks Denis,

It is resolved, updated perfectly with new ntfs drivers.

Jake

On Sun, Sep 12, 2021 at 9:51 PM Denis Silakov 
wrote:

> New ntfs-3g has landed in the vzlinux repos; I think you can enable EPEL
> again.
> --
> *From:* users-boun...@openvz.org  on behalf of
> jjs - mainphrame 
> *Sent:* Saturday, September 11, 2021 10:33 PM
> *To:* OpenVZ users 
> *Subject:* Re: [Users] yum update aborts due to ntfs issue
>
> Thanks Denis,
>
> Disabled epel for now, will keep an eye on ntfs updates.
>
> Jake
>
> On Sat, Sep 11, 2021 at 11:20 AM Denis Silakov 
> wrote:
>
> You have EPEL enabled, and it looks like the new ntfs-3g from there can't
> properly replace the version installed from the vzlinux repos. You can
> either disable EPEL for now, or just ignore ntfs-* packages from it, or
> wait a bit while we are testing this new ntfs-3g before merging it into
> the VzLinux repos.
> ----------
> *From:* users-boun...@openvz.org  on behalf of
> jjs - mainphrame 
> *Sent:* Saturday, September 11, 2021 7:34 PM
> *To:* OpenVZ users 
> *Subject:* [Users] yum update aborts due to ntfs issue
>
> All of my openvz servers have this problem since a few days ago. Is there
> a problem with the repos?
>
> [root@vz3 ~]# yum update
>
> <... many lines trimmed...>
>
> Error: Package: 2:ntfsprogs-2017.3.23-11.vl7.1.x86_64
> (@virtuozzolinux-base)
>Requires: libntfs-3g.so.88()(64bit)
>Removing: 2:ntfs-3g-2017.3.23-11.vl7.1.x86_64
> (@virtuozzolinux-base)
>libntfs-3g.so.88()(64bit)
>Obsoleted By: 2:ntfs-3g-libs-2021.8.22-1.el7.x86_64 (epel)
>   ~libntfs-3g.so.89()(64bit)
>Available: 2:ntfs-3g-2017.3.23-11.vl7.x86_64
> (virtuozzolinux-base)
>libntfs-3g.so.88()(64bit)
>  You could try using --skip-broken to work around the problem
>  You could try running: rpm -Va --nofiles --nodigest
>
> Jake
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] yum update aborts due to ntfs issue

2021-09-11 Thread jjs - mainphrame
Thanks Denis,

Disabled epel for now, will keep an eye on ntfs updates.
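
For the record, either of these does the trick on EL7 (yum-config-manager
comes from yum-utils):

yum-config-manager --disable epel

or, to keep EPEL but fence off just the conflicting packages, add this to
the [epel] section of /etc/yum.repos.d/epel.repo:

exclude=ntfs-3g* ntfsprogs*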

Jake

On Sat, Sep 11, 2021 at 11:20 AM Denis Silakov 
wrote:

> You have EPEL enabled, and it looks like the new ntfs-3g from there can't
> properly replace the version installed from the vzlinux repos. You can
> either disable EPEL for now, or just ignore ntfs-* packages from it, or
> wait a bit while we are testing this new ntfs-3g before merging it into
> the VzLinux repos.
> --
> *From:* users-boun...@openvz.org  on behalf of
> jjs - mainphrame 
> *Sent:* Saturday, September 11, 2021 7:34 PM
> *To:* OpenVZ users 
> *Subject:* [Users] yum update aborts due to ntfs issue
>
> All of my openvz servers have this problem since a few days ago. Is there
> a problem with the repos?
>
> [root@vz3 ~]# yum update
>
> <... many lines trimmed...>
>
> Error: Package: 2:ntfsprogs-2017.3.23-11.vl7.1.x86_64
> (@virtuozzolinux-base)
>Requires: libntfs-3g.so.88()(64bit)
>Removing: 2:ntfs-3g-2017.3.23-11.vl7.1.x86_64
> (@virtuozzolinux-base)
>libntfs-3g.so.88()(64bit)
>Obsoleted By: 2:ntfs-3g-libs-2021.8.22-1.el7.x86_64 (epel)
>   ~libntfs-3g.so.89()(64bit)
>Available: 2:ntfs-3g-2017.3.23-11.vl7.x86_64
> (virtuozzolinux-base)
>libntfs-3g.so.88()(64bit)
>  You could try using --skip-broken to work around the problem
>  You could try running: rpm -Va --nofiles --nodigest
>
> Jake
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] yum update aborts due to ntfs issue

2021-09-11 Thread jjs - mainphrame
All of my openvz servers have had this problem since a few days ago. Is there a
problem with the repos?

[root@vz3 ~]# yum update

<... many lines trimmed...>

Error: Package: 2:ntfsprogs-2017.3.23-11.vl7.1.x86_64 (@virtuozzolinux-base)
   Requires: libntfs-3g.so.88()(64bit)
   Removing: 2:ntfs-3g-2017.3.23-11.vl7.1.x86_64
(@virtuozzolinux-base)
   libntfs-3g.so.88()(64bit)
   Obsoleted By: 2:ntfs-3g-libs-2021.8.22-1.el7.x86_64 (epel)
  ~libntfs-3g.so.89()(64bit)
   Available: 2:ntfs-3g-2017.3.23-11.vl7.x86_64
(virtuozzolinux-base)
   libntfs-3g.so.88()(64bit)
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

Jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] bash_profile execution

2021-07-01 Thread jjs - mainphrame
That's odd. I've been using openvz since, well, a long time ago. And as far
as I can remember, doing "vzctl enter" has never sourced the login profile,
since entering the CT in that fashion makes you basically an unknown
uid 0 entity.
Jake





On Thu, Jul 1, 2021 at 10:21 AM Oleksiy Tkachenko 
wrote:

> but my bash_profiles were always executed before I ran the latest openVZ7
> updates.
> That's the point)
> Is that a bug?
>
>
> On Thu, 1 Jul 2021 at 20:05, jjs - mainphrame wrote:
>
>> It would run during an actual login. But entering the CT via vzctl enter
>> bypasses the normal login process.
>>
>> Jake
>>
>> On Thu, Jul 1, 2021 at 9:55 AM Oleksiy Tkachenko 
>> wrote:
>>
>>> That works fine.
>>> But how do I run that automatically during login?
>>>
>>> On Thu, 1 Jul 2021 at 18:19, jjs - mainphrame wrote:
>>>
>>>> This works for me:
>>>>
>>>> su -
>>>>
>>>> Jake
>>>>
>>>> On Thu, Jul 1, 2021 at 8:03 AM Oleksiy Tkachenko 
>>>> wrote:
>>>>
>>>>> I've realized that bash_profile is not executed automatically on "vzctl
>>>>> enter CID" now.
>>>>>
>>>>> How can I execute it?
>>>>> Thank you!
>>>>>
>>>>>
>>>>> --
>>>>> Olexiy
>>>>> ___
>>>>> Users mailing list
>>>>> Users@openvz.org
>>>>> https://lists.openvz.org/mailman/listinfo/users
>>>>>
>>>> ___
>>>> Users mailing list
>>>> Users@openvz.org
>>>> https://lists.openvz.org/mailman/listinfo/users
>>>>
>>> ___
>>> Users mailing list
>>> Users@openvz.org
>>> https://lists.openvz.org/mailman/listinfo/users
>>>
>> ___
>> Users mailing list
>> Users@openvz.org
>> https://lists.openvz.org/mailman/listinfo/users
>>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] bash_profile execution

2021-07-01 Thread jjs - mainphrame
It would run during an actual login. But entering the CT via vzctl enter
bypasses the normal login process.
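
If you really want the profile sourced after entering, the simplest thing
is to start a login shell yourself once inside the CT (assuming bash):

# replaces the current shell with a login shell, which sources
# /etc/profile and ~/.bash_profile
exec bash -l

su - works just as well, since it runs the full login machinery.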

Jake

On Thu, Jul 1, 2021 at 9:55 AM Oleksiy Tkachenko  wrote:

> That works fine.
> But how do I run that automatically during login?
>
> On Thu, 1 Jul 2021 at 18:19, jjs - mainphrame wrote:
>
>> This works for me:
>>
>> su -
>>
>> Jake
>>
>> On Thu, Jul 1, 2021 at 8:03 AM Oleksiy Tkachenko 
>> wrote:
>>
>>> I've realized that bash_profile is not executed automatically on "vzctl
>>> enter CID" now.
>>>
>>> How can I execute it?
>>> Thank you!
>>>
>>>
>>> --
>>> Olexiy
>>> ___
>>> Users mailing list
>>> Users@openvz.org
>>> https://lists.openvz.org/mailman/listinfo/users
>>>
>> ___
>> Users mailing list
>> Users@openvz.org
>> https://lists.openvz.org/mailman/listinfo/users
>>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] bash_profile execution

2021-07-01 Thread jjs - mainphrame
This works for me:

su -

Jake

On Thu, Jul 1, 2021 at 8:03 AM Oleksiy Tkachenko  wrote:

> I've realized that bash_profile is not executed automatically on "vzctl
> enter CID" now.
>
> How can I execute it?
> Thank you!
>
>
> --
> Olexiy
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] virtuozzo base OS and new centos 8 orientations

2020-12-10 Thread jjs - mainphrame
From what I've heard, Red Hat will continue to make their source available
in the same repos as before, so even if CentOS ends, Virtuozzo can still
build from the same RH source repos that CentOS had been pulling from.

Jake



On Thu, Dec 10, 2020 at 9:12 AM jehan Procaccia tem-tsp <
jehan.procac...@tem-tsp.eu> wrote:

> Hello
>
> virtuozzo 7 OS is based on centos 7, as I guessed from:
>
>
> https://www.virtuozzo.com/connect/details/blog/view/an-overview-of-virtuozzo-linux-7.html
>
> => Virtuozzo Linux 7 is based on the CentOS7 distribution and offers full
> compatibility with CentOS and the RedHat family.
>
> Then what will be the base of virtuozzo 8, regarding that announcement:
> https://blog.centos.org/2020/12/future-is-centos-stream/
>
> the threaded comments are furious about that decision; would virtuozzo 8
> rebuild from RHEL source directly, or be based on another distro?
> https://linux.oracle.com/switch/centos/
> Gregory Kurtzer: https://rockylinux.org/
>
> openSUSE
>
> etc ...
>
> Regards .
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Virtuozzo containers no longer a supported Virtuozzo product !?

2020-12-04 Thread jjs - mainphrame
I hear what you are saying about deploying lxc on any Linux distro. That is
a strong point.

But for me it's a good tradeoff. Maybe someday lxc/lxd will reach the level
of openvz, and if so, I'll re-evaluate, but for now I'm fine with setting
up dedicated openvz boxes, as I can deploy any Linux distro in a container,
or any OS in a VM.

Jake



On Fri, Dec 4, 2020 at 12:09 PM Narcis Garcia  wrote:

> I'm migrating servers from OpenVZ to LXC (by using ctctl) because I can
> deploy LXC on any GNU/Linux distro and architecture.
>
> BUT: LXC still does not work as optimally as OpenVZ, and OpenVZ is far
> more mature than LXC.
>
>
>
> Narcis Garcia
> On 4/12/20 at 20:15, jjs - mainphrame wrote:
> > I think it's just that virtuozzo is no longer supporting the "containers
> > only" solution. The new baseline is "containers and VMs".
> >
> > I agree they might have made that more clear, but it seems there's no
> > cause for worry. I've done long term testing with lxc/lxd and after
> > various issues, ended up moving all containers to openvz.
> >
> > The ability to do VMs is a plus, for instance if I have to hold my nose
> > and spin up a windows VM for testing.
> >
> > Jake
> >
> >
> >
> > On Fri, Dec 4, 2020 at 11:06 AM Jehan Procaccia IMT
> > <jehan.procac...@imtbs-tsp.eu> wrote:
> >
> > then, is this misleading "marketing" information? Or are Containers
> > (CT), which are to me the greatest added value of virtuozzo
> > technology, to be terminated?
> > That should be clarified by virtuozzo staff.
> >
> > indeed in
> > https://www.virtuozzo.com/products/virtuozzo-hybrid-server.html ,
> > containers => https://www.virtuozzo.com/products/compute.html
> > are mentioned
> > and in
> >
> https://www.virtuozzo.com/fileadmin/user_upload/downloads/Data_Sheets/Virtuozzo7-Platform-DS-EN-Ltr.pdf
> >
> > I strongly defend virtuozzo/openVZ vs proxmox in my community because
> > of VZ CTs, which are supposedly far better than LXC containers (!?)
> >
> > Please prove me right.
> >
> > regards .
> >
> >
> >> On 04/12/2020 at 19:48, jjs - mainphrame wrote:
> >> That looked strange to me, but after looking at their website, it
> >> seems they're just announcing the end of support for old product
> >> lines.
> >>
> >> It looks like "Virtuozzo Hybrid Server" is basically what we have
> >> in openvz 7, plus premium features.
> >>
> >> Joe
> >>
> >> On Fri, Dec 4, 2020 at 10:36 AM Jehan Procaccia IMT
> >> <jehan.procac...@imtbs-tsp.eu> wrote:
> >>
> >> Hello
> >>
> >> defending the added value of virtuozzo containers (CT), one
> >> replied to me with:
> >>
> >>
> https://www.virtuozzo.com/support/all-products/virtuozzo-containers.html
> >>
> >> *Please note*: Virtuozzo Containers for Linux is no longer a
> >> supported Virtuozzo product. Users can purchase extended
> >> support until September 2020.
> >>
> >> Is this serious!?
> >>
> >> Please let us know .
> >>
> >> Regards .
> >>
> >> ___
> >> Users mailing list
> >> Users@openvz.org
> >> https://lists.openvz.org/mailman/listinfo/users
> >>
> >>
> >> ___
> >> Users mailing list
> >> Users@openvz.org
> >> https://lists.openvz.org/mailman/listinfo/users
> >
> >
> >
> > ___
> > Users mailing list
> > Users@openvz.org
> > https://lists.openvz.org/mailman/listinfo/users
> >
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Virtuozzo containers no longer a supported Virtuozzo product !?

2020-12-04 Thread jjs - mainphrame
I think it's just that virtuozzo is no longer supporting the "containers
only" solution. The new baseline is "containers and VMs".

I agree they might have made that more clear, but it seems there's no cause
for worry. I've done long term testing with lxc/lxd and after various
issues, ended up moving all containers to openvz.

The ability to do VMs is a plus, for instance if I have to hold my nose and
spin up a windows VM for testing.

Jake



On Fri, Dec 4, 2020 at 11:06 AM Jehan Procaccia IMT <
jehan.procac...@imtbs-tsp.eu> wrote:

> then, is this misleading "marketing" information? Or are Containers
> (CT), which are to me the greatest added value of virtuozzo technology, to
> be terminated?
> That should be clarified by virtuozzo staff.
>
> indeed in https://www.virtuozzo.com/products/virtuozzo-hybrid-server.html
> , containers => https://www.virtuozzo.com/products/compute.html
> are mentioned
> and in
> https://www.virtuozzo.com/fileadmin/user_upload/downloads/Data_Sheets/Virtuozzo7-Platform-DS-EN-Ltr.pdf
>
> I strongly defend virtuozzo/openVZ vs proxmox in my community because of VZ
> CTs, which are supposedly far better than LXC containers (!?)
>
> Please prove me right.
>
> regards .
>
>
> On 04/12/2020 at 19:48, jjs - mainphrame wrote:
>
> That looked strange to me, but after looking at their website, it seems
> they're just announcing the end of support for old product lines.
>
> It looks like "Virtuozzo Hybrid Server" is basically what we have in
> openvz 7, plus premium features.
>
> Joe
>
> On Fri, Dec 4, 2020 at 10:36 AM Jehan Procaccia IMT <
> jehan.procac...@imtbs-tsp.eu> wrote:
>
>> Hello
>>
>> defending the added value of virtuozzo containers (CT), one replied to me
>> with:
>>
>> https://www.virtuozzo.com/support/all-products/virtuozzo-containers.html
>>
>> *Please note: Virtuozzo Containers for Linux is no longer a supported
>> Virtuozzo product. Users can purchase extended support until
>> September 2020.*
>>
>> Is this serious!?
>>
>> Please let us know .
>>
>> Regards .
>> ___
>> Users mailing list
>> Users@openvz.org
>> https://lists.openvz.org/mailman/listinfo/users
>>
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
>
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Virtuozzo containers no longer a supported Virtuozzo product !?

2020-12-04 Thread jjs - mainphrame
That looked strange to me, but after looking at their website, it seems
they're just announcing the end of support for old product lines.

It looks like "Virtuozzo Hybrid Server" is basically what we have in openvz
7, plus premium features.

Joe

On Fri, Dec 4, 2020 at 10:36 AM Jehan Procaccia IMT <
jehan.procac...@imtbs-tsp.eu> wrote:

> Hello
>
> defending the added value of virtuozzo containers (CT), one replied to me
> with:
>
> https://www.virtuozzo.com/support/all-products/virtuozzo-containers.html
>
> *Please note: Virtuozzo Containers for Linux is no longer a supported
> Virtuozzo product. Users can purchase extended support until
> September 2020.*
>
> Is this serious!?
>
> Please let us know .
>
> Regards .
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Update vzlinux-8 template: crash and burn

2020-11-17 Thread jjs - mainphrame
Thanks, it's good now.

Jake

On Tue, Nov 17, 2020 at 3:32 AM Denis Silakov 
wrote:

> Caused by rebase to 8.3. Should be ok now, please check.
> --
> *From:* users-boun...@openvz.org  on behalf of
> jjs - mainphrame 
> *Sent:* Tuesday, November 17, 2020 2:17 AM
> *To:* OpenVZ users 
> *Subject:* [Users] Update vzlinux-8 template: crash and burn
>
> Looks like the vzlinux-8 template is broken. Should I file a bug?
>
> [root@annie ~]# vzpkg update cache vzlinux-8-x86_64
> Update OS template cache for vzlinux-8-x86_64 template
> Cache was expired
> 0 files removed
> base0   1.0 MB/s | 7.8 MB
> 00:07
> base1   269  B/s | 257  B
> 00:00
> Metadata cache created.
> Last metadata expiration check: 0:00:01 ago on Mon Nov 16 21:47:09 2020.
> The operation has completed successfully.
> mke2fs 1.42.9 (28-Dec-2013)
> Discarding device blocks: done
> Filesystem label=
> OS type: Linux
> Block size=4096 (log=2)
> Fragment size=4096 (log=2)
> Stride=0 blocks, Stripe width=0 blocks
> 655360 inodes, 2620928 blocks
> 131046 blocks (5.00%) reserved for the super user
> First data block=0
> Maximum filesystem blocks=2151677952
> 80 block groups
> 32768 blocks per group, 32768 fragments per group
> 8192 inodes per group
> Superblock backups stored on blocks:
> 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
>
> Allocating group tables: done
> Writing inode tables: done
> Creating journal (32768 blocks): done
> Writing superblocks and filesystem accounting information: done
>
> tune2fs 1.42.9 (28-Dec-2013)
> Setting maximal mount count to -1
> Setting error behavior to 2
> Setting interval between checks to 0 seconds
> base0   7.8 kB/s | 3.9 kB
> 00:00
> base1   6.3 kB/s | 2.9 kB
> 00:00
> Error:
>  Problem: package redhat-rpm-config-123-1.vl8.noarch requires annobin, but
> none of the providers can be installed
>   - package annobin-9.35-1.vl8.x86_64 requires gcc >= 8, but none of the
> providers can be installed
>   - package perl-devel-4:5.26.3-416.vl8.x86_64 requires redhat-rpm-config,
> but none of the providers can be installed
>   - package gcc-8.4.1-1.vl8.x86_64 requires glibc-devel >= 2.2.90-12, but
> none of the providers can be installed
>   - package perl-4:5.26.3-416.vl8.x86_64 requires perl-devel(x86-64) =
> 4:5.26.3-416.vl8, but none of the providers can be installed
>   - package glibc-devel-2.28-136.vl8.x86_64 requires glibc-headers, but
> none of the providers can be installed
>   - package glibc-devel-2.28-136.vl8.x86_64 requires glibc-headers =
> 2.28-136.vl8, but none of the providers can be installed
>   - conflicting requests
>   - nothing provides kernel-headers >= 2.2.1 needed by
> glibc-headers-2.28-136.vl8.x86_64
>   - nothing provides kernel-headers needed by
> glibc-headers-2.28-136.vl8.x86_64
> (try to add '--skip-broken' to skip uninstallable packages or '--nobest'
> to use not only best candidate packages)
> Error: /usr/share/vzyum/bin/yum failed, exitcode=1
> Error: Failed to umount ploop image
> /vz/tmp//vzpkg.oJJkBg/cache-private/root.hdd: Error in ploop_umount_image
> (ploop.c:2649): Image /vz/tmp/vzpkg.oJJkBg/cache-private/root.hdd/root.hds
> is not mounted 40
> [root@annie ~]#
>
> Jake
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] Update vzlinux-8 template: crash and burn

2020-11-16 Thread jjs - mainphrame
Looks like the vzlinux-8 template is broken. Should I file a bug?

[root@annie ~]# vzpkg update cache vzlinux-8-x86_64
Update OS template cache for vzlinux-8-x86_64 template
Cache was expired
0 files removed
base0   1.0 MB/s | 7.8 MB
00:07
base1   269  B/s | 257  B
00:00
Metadata cache created.
Last metadata expiration check: 0:00:01 ago on Mon Nov 16 21:47:09 2020.
The operation has completed successfully.
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2620928 blocks
131046 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

tune2fs 1.42.9 (28-Dec-2013)
Setting maximal mount count to -1
Setting error behavior to 2
Setting interval between checks to 0 seconds
base0   7.8 kB/s | 3.9 kB
00:00
base1   6.3 kB/s | 2.9 kB
00:00
Error:
 Problem: package redhat-rpm-config-123-1.vl8.noarch requires annobin, but
none of the providers can be installed
  - package annobin-9.35-1.vl8.x86_64 requires gcc >= 8, but none of the
providers can be installed
  - package perl-devel-4:5.26.3-416.vl8.x86_64 requires redhat-rpm-config,
but none of the providers can be installed
  - package gcc-8.4.1-1.vl8.x86_64 requires glibc-devel >= 2.2.90-12, but
none of the providers can be installed
  - package perl-4:5.26.3-416.vl8.x86_64 requires perl-devel(x86-64) =
4:5.26.3-416.vl8, but none of the providers can be installed
  - package glibc-devel-2.28-136.vl8.x86_64 requires glibc-headers, but
none of the providers can be installed
  - package glibc-devel-2.28-136.vl8.x86_64 requires glibc-headers =
2.28-136.vl8, but none of the providers can be installed
  - conflicting requests
  - nothing provides kernel-headers >= 2.2.1 needed by
glibc-headers-2.28-136.vl8.x86_64
  - nothing provides kernel-headers needed by
glibc-headers-2.28-136.vl8.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to
use not only best candidate packages)
Error: /usr/share/vzyum/bin/yum failed, exitcode=1
Error: Failed to umount ploop image
/vz/tmp//vzpkg.oJJkBg/cache-private/root.hdd: Error in ploop_umount_image
(ploop.c:2649): Image /vz/tmp/vzpkg.oJJkBg/cache-private/root.hdd/root.hds
is not mounted 40
[root@annie ~]#

Jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] Repo issues with vzlinux 8 container

2020-10-02 Thread jjs - mainphrame
Still some rough edges here - bug report? Or is this known?

[root@vzlinux8 ~]# yum update -y --allowerasing --nobest
Last metadata expiration check: 0:01:12 ago on Fri Oct  2 08:49:41 2020.
Dependencies resolved.

 Problem: cannot install the best update candidate for package
perl-Convert-UUlib-2:1.6-1.el8.x86_64
  - problem with installed package perl-Convert-UUlib-2:1.6-1.el8.x86_64
  - nothing provides perl(common::sense) needed by
perl-Convert-UUlib-2:1.71-1.el8.x86_64

 PackageArchitecture   Version   Repository
   Size

Skipping packages with broken dependencies:
 perl-Convert-UUlib x86_64 2:1.71-1.el8  epel
  242 k

Transaction Summary

Skip  1 Package

Nothing to do.
Complete!
[root@vzlinux8 ~]#

Jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Virtuozzo Technical Webinar (roadmap slides included): 8AM EDT / 2PM CEST September 29, 2020

2020-09-29 Thread jjs - mainphrame
Thanks for this, I look forward to the replay. Please keep us posted.

Jake

On Mon, Sep 28, 2020 at 9:01 AM Konstantin Khorenko 
wrote:

> On 09/28/2020 06:03 PM, jjs - mainphrame wrote:
>
> This is much appreciated, but is scheduled to occur while I am asleep.
>
> Will there be a recording available for later viewing?
>
>
> Yes, the webinar is planned to be recorded.
>
>
> Joe
>
> On Mon, Sep 28, 2020 at 4:38 AM Konstantin Khorenko <
> khore...@virtuozzo.com> wrote:
>
>> Hi All,
>>
>> there will be a technical webinar from Virtuozzo with some official
>> information about roadmap,
>> so all who are interested are very welcome!
>>
>> The registration link is available at openvz.org,
>> for simplicity here as well: https://bit.ly/2Ekkodo
>>
>> Speakers: director of our RnD and our Product Manager.
>>
>> Hope that will dispel all the doubts about the future - there will be
>> OVZ8!
>>
>> Have a nice day!
>>
>> --
>> Best regards,
>>
>> Konstantin Khorenko,
>> Virtuozzo Linux Kernel Team
>> ___
>> Users mailing list
>> Users@openvz.org
>> https://lists.openvz.org/mailman/listinfo/users
>>
>
>
> ___
> Users mailing 
> listUsers@openvz.orghttps://lists.openvz.org/mailman/listinfo/users
>
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Virtuozzo Technical Webinar (roadmap slides included): 8AM EDT / 2PM CEST September 29, 2020

2020-09-28 Thread jjs - mainphrame
This is much appreciated, but is scheduled to occur while I am asleep.

Will there be a recording available for later viewing?

Joe

On Mon, Sep 28, 2020 at 4:38 AM Konstantin Khorenko 
wrote:

> Hi All,
>
> there will be a technical webinar from Virtuozzo with some official
> information about roadmap,
> so all who are interested are very welcome!
>
> The registration link is available at openvz.org,
> for simplicity here as well: https://bit.ly/2Ekkodo
>
> Speakers: director of our RnD and our Product Manager.
>
> Hope that will dispel all the doubts about the future - there will be
> OVZ8!
>
> Have a nice day!
>
> --
> Best regards,
>
> Konstantin Khorenko,
> Virtuozzo Linux Kernel Team
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] openvz-diff-backups - survival guide - part one

2020-09-11 Thread jjs - mainphrame
Thanks for this, it's better than what we had before.

Jake

On Fri, Sep 11, 2020 at 5:24 PM tranxene50 <
tranxen...@openvz-diff-backups.fr> wrote:

> Hello!
>
> Here is the first part of a quick "survival" guide in order to start off
> on the right foot with openvz-diff-backups (OVZDB for short).
>
> Please, be aware that English is not my native language. So, if you see
> something weird, please quote the sentence and correct it.
>
> Equally, if something is not clear, quote and ask: I will try to answer
> the best as I can.
>
> # -
>
> Firstly, you need to be aware that OVZDB uses three
> "hosts/locations/storages" and "navigates" through them:
>
> # -
>
> - SOURCE : "host" where OVZDB is installed
>
> Most of the time, this is the server on which OpenVZ is running the
> containers you want to back up.
>
> But it can be any *nix system (with Bash/OpenSSH/rsync) in order to
> replicate (upload or download) backups between REMOTE and MASTER.
>
> Everything works over SSH as follows: SOURCE -> SSH key 1 -> MASTER ->
> SSH key 2 -> REMOTE
>
> # -
>
> - MASTER : *mandatory* "host" where backups are stored (copy A)
>
> Ideally, MASTER is a dedicated server/VPS/other because OVZDB relies on
> IOPS and, the more RAM you have to cache dentries and inodes, the
> faster OVZDB will be.
>
> However, by default, backups are stored on the same server
> (MASTER_SSH_PATH="root@localhost:/home/backup/openvz-diff-backups").
>
> This is useful if you want to test ASAP or if you have a secondary drive
> where backups can be stored (ex: sda for OpenVZ, sdb for backups).
>
> In this case, SOURCE will communicate with MASTER (both being on the
> same server) using SSH through localhost: as soon as "ssh -p 22
> root@127.0.0.1" gives you a shell without asking for a password, you are
> done.
>
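> For example, a minimal key setup sketch (standard OpenSSH commands; adapt
> users/hosts to your case):
>
> ssh-keygen -t ed25519          # accept the default path
> ssh-copy-id root@127.0.0.1     # authorize the key on MASTER
> ssh -p 22 root@127.0.0.1 true  # must succeed without a password prompt
>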
> On the contrary, if MASTER is a distant host (recommended), you need to
> adjust MASTER_SSH_PATH parameter.
>
> Ex:
> MASTER_SSH_PATH="r...@backup.my-server.net:/any-absolute-path-you-want"(trailing
>
> slash is not needed and "backup.my-server.net" will always be resolved
> to its IPV4 or IPV6 address)
>
> If you need to use a SSH port different from 22, please see
> MASTER_SSH_OPTIONS parameter in config file (openvz-diff-backups.conf).
>
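> A config sketch combining both parameters (values are only an
> illustration; openvz-diff-backups.conf remains the reference):
>
> MASTER_SSH_PATH="root@backup.my-server.net:/home/backup/openvz-diff-backups"
> MASTER_SSH_OPTIONS="-p 2222"   # assumption: plain ssh options, here a custom port
>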
> # -
>
> - REMOTE : *optional* host where backups are replicated (copy B)
>
> In order to secure backups, you may want to replicate them, if possible,
> in a different geographical location.
>
> MASTER/REMOTE "hosts" can be anything as long as a *nix system is
> present with a shell, OpenSSH (other SSH servers have not been tested
> yet) and, most importantly, rsync.
>
> This can be a big fat dedicated server, a large VPS, a medium instance
> in the Cloud, a NAS at home or even - if someone is willing to test (I
> didn't because mine is too old) - an Android smartphone...
>
> SOURCE "host" always requires a Bash shell but MASTER/REMOTE "hosts"
> only need a shell (sh/dash/ash/etc) and OVZDB can also deal with
> "Busybox" instead of using standard Unix tools.
>
> In short, OVZDB does not care and will run as long as the "host" can
> handle it (which can take hours/days on very low-end hardware).
>
> # -
>
>  From SOURCE, you can launch any task (more details in part 2):
>
> - backup task will "convert" containers present on SOURCE into backups
> on MASTER
>
> - restore task will "convert" backups present on MASTER into containers
> on SOURCE
>
> - upload task will replicate backups present on MASTER to REMOTE (push)
>
> - download task will replicate backups present on REMOTE to MASTER (pull)
>
> - delete task will remove backups present on MASTER and/or REMOTE (you
> choose)
>
> - destroy task will wipe "cache" present on MASTER and/or REMOTE (more
> in part 2 because it is not intuitive)
>
> - update task will check and/or update OVZDB to its latest version
> ("one-click" upgrade)
>
> # -
>
> Before going into details about each command, here are some use case
> scenarios about backups:
>
> (to be shorter, I will not talk about migrating IP addresses, adjusting
> firewalls, replacing a dedicated server and other things)
>
> - 1 server
>
> Your only choice is to store backups on the same server, if possible on
> a secondary hard drive or, better, on an external hard drive.
>
> Long story short, if you are a believer, pray! ^^
>
> - 2 servers (one for prod, one for backup)
>
> If you have enough space, store backups on prod server (copy A) and
> replicate them (push) on backup server (copy B).
>
> (or, better, on backup server, replicate backups using "pull" mode: this
> is safer because it would require that both servers are compromised to
> lose all your backups)
>
> Then, use OVZDB on backup server and restore every container on a daily
> basis to speed things up in the event of an emergency "switch".
>
> This way, if the prod server crashes, you can restore containers on the backup
> 

Re: [Users] Ploop Incremental backups strategy

2020-09-09 Thread jjs - mainphrame
Thank you tranxene50, I've installed openvz-diff-backups on my ovz-7 hosts,
and so far it looks very promising.

Jake

On Tue, Sep 8, 2020 at 4:05 PM tranxene50 
wrote:

> Hello!
>
> Please forgive my bad English, I live in France.
>
> A few years ago, as a hobby in the beginning, I created a file-based
> "incremental" backup tool (that heavily relies on rsync):
> https://www.openvz-diff-backups.fr
>
> It works flawlessly with OpenVZ 6 but has only been briefly tested with
> OpenVZ 7.
>
> The main difference between OpenVZ 6 and 7 is that memory dump
> (checkpoint) is now done using CRIU instead of OpenVZ "Legacy" kernel.
>
> But,  globally, the process is still the same: create a ploop snapshot,
> mount it, sync it (with rsync) and then create a "diff" backup (using
> rsync again with --link-dest).
>
> As far as I know, this is one of the rare GPL tools (any hint
> appreciated!) able to back up/restore CT files and, most importantly,
> full memory state.
>
> Restoring a "live" backup is like resuming an OS (Windows, Linux, MacOS,
> etc) after it had been put to sleep: the container will resume and work
> again, just like nothing had happened.
>
> So you can cheat and pretend 100% uptime even if a container was down
> most of the time... (evidently this is a joke and, please, do not play
> this game: /var/log/* will betray you)
>
> Note: to migrate CT between OpenVZ 6 and 7, you must use "cold" backups
> because memory dumps are incompatible (OpenVZ Kernel vs CRIU).
>
> To answer your questions:
>
> 1) you can restore any previous backup because each one is considered
> a full backup (no diff/incremental computing: it is just
> files/directories/other - and hard links)
>
> 2) because backups are just a bunch of files (and mostly hard links),
> you can easily browse any backup of a CT and copy any files/directories
> needed
>
> At the moment, development of openvz-diff-backups is on pause - because
> it fulfills all my needs with OpenVZ 6 - but I am in the process of
> moving to OpenVZ 7 in a few months.
>
> So, if you encounter a bug or an issue, please leave me a message: the
> tool has a very conservative approach and is designed to cleanly stop if
> anything unexpected/unknown/abnormal happens.
>
> Have a nice day!
>
> Le 08/09/2020 à 14:24, mailingl...@tikklik.nl a écrit :
> > Hello,
> >
> > Using openvz7
> > I'm looking at my backup strategy.
> > I always used rsync on openvz6,
> > and it looks like on openvz7 this can also be done on /vz/root/VEID,
> > but I don't know if I can do a full restore from that...
> >
> >
> > Now I'm looking at snapshots:
> > https://github.com/TamCore/vzpbackup
> > it can make full backups and incremental backups.
> >
> > Is someone using this script?
> > I have some questions; hope someone can help.
> >
> > The incremental backups are nice to save space and time on a remote
> > backup server,
> > but how does a restore work?
> > From what I know you can only restore a full snapshot; the incremental
> > backups have only the changed files.
> >
> > Question 2.
> > Is it possible to extract a file from the backup for a single file
> > restore?
> > And if so can someone tell me how?
> >
> >
> > Thanks
> > Steffan
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > ___
> > Users mailing list
> > Users@openvz.org
> > https://lists.openvz.org/mailman/listinfo/users
>
> --
> tranxene50
> tranxen...@openvz-diff-backups.fr
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Issues after updating to 7.0.14 (136)

2020-07-02 Thread jjs - mainphrame
Thanks for that sanity check, the conundrum is resolved. vzlinux-release
and virtuozzo-release are indeed different things.

Jake

On Thu, Jul 2, 2020 at 10:27 AM Jonathan Wright 
wrote:

> /etc/redhat-release and /etc/virtuozzo-release are two different things.
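>
> In other words (file names and example outputs as they appear in this
> thread; your versions will differ):
>
> cat /etc/vzlinux-release      # distro release, e.g. "Virtuozzo Linux release 7.8.0 (609)"
> cat /etc/virtuozzo-release    # product release, e.g. "OpenVZ release 7.0.15 (222)"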
> On 7/2/20 12:16 PM, jjs - mainphrame wrote:
>
> Jehan -
>
> I get the same output here -
>
> [root@annie ~]# yum repolist  |grep virt
> virtuozzolinux-baseVirtuozzoLinux Base
>  15,415+189
> virtuozzolinux-updates VirtuozzoLinux Updates
>  0
>
I'm baffled as to how you're on 7.8.0 while I'm at 7.0.15 even though I'm
> fully up to date.
>
> # uname -a
> Linux annie.ufcfan.org 3.10.0-1127.8.2.vz7.151.10 #1 SMP Mon Jun 1
> 19:05:52 MSK 2020 x86_64 x86_64 x86_64 GNU/Linux
>
> Jake
>
> On Thu, Jul 2, 2020 at 10:08 AM Jehan PROCACCIA <
> jehan.procac...@imtbs-tsp.eu> wrote:
>
>> no factory , just repos virtuozzolinux-base and openvz-os
>>
>> # yum repolist  |grep virt
>> virtuozzolinux-baseVirtuozzoLinux Base15
>> 415+189
>> virtuozzolinux-updates VirtuozzoLinux
>> Updates  0
>>
>> Jehan .
>>
>> --
>> *De: *"jjs - mainphrame" 
>> *À: *"OpenVZ users" 
>> *Cc: *"Kevin Drysdale" 
>> *Envoyé: *Jeudi 2 Juillet 2020 18:22:33
>> *Objet: *Re: [Users] Issues after updating to 7.0.14 (136)
>>
>> Jehan, are you running factory?
>>
>> My ovz hosts are up to date, and I see:
>>
>> [root@annie ~]# cat /etc/virtuozzo-release
>> OpenVZ release 7.0.15 (222)
>>
>> Jake
>>
>>
>> On Thu, Jul 2, 2020 at 9:08 AM Jehan Procaccia IMT <
>> jehan.procac...@imtbs-tsp.eu> wrote:
>>
>>> "updating to 7.0.14 (136)" !?
>>>
>>> I did an update yesterday; I am far beyond that version
>>>
>>> *# cat /etc/vzlinux-release*
>>> *Virtuozzo Linux release 7.8.0 (609)*
>>>
>>> *# uname -a *
>>> *Linux localhost 3.10.0-1127.8.2.vz7.151.14 #1 SMP Tue Jun 9 12:58:54
>>> MSK 2020 x86_64 x86_64 x86_64 GNU/Linux*
>>>
>>> why don't you try updating to the latest version?
>>>
>>>
>>> Le 29/06/2020 à 12:30, Kevin Drysdale a écrit :
>>>
>>> Hello,
>>>
>>> After updating one of our OpenVZ VPS hosting nodes at the end of last
>>> week, we've started to have issues with corruption apparently occurring
>>> inside containers.  Issues of this nature have never affected the node
>>> previously, and there do not appear to be any hardware issues that could
>>> explain this.
>>>
>>> Specifically, a few hours after updating, we began to see containers
>>> experiencing errors such as this in the logs:
>>>
>>> [90471.678994] EXT4-fs (ploop35454p1): error count since last fsck: 25
>>> [90471.679022] EXT4-fs (ploop35454p1): initial error at time 1593205255:
>>> ext4_ext_find_extent:904: inode 136399
>>> [90471.679030] EXT4-fs (ploop35454p1): last error at time 1593232922:
>>> ext4_ext_find_extent:904: inode 136399
>>> [95189.954569] EXT4-fs (ploop42983p1): error count since last fsck: 67
>>> [95189.954582] EXT4-fs (ploop42983p1): initial error at time 1593210174:
>>> htree_dirblock_to_tree:918: inode 926441: block 3683060
>>> [95189.954589] EXT4-fs (ploop42983p1): last error at time 1593276902:
>>> ext4_iget:4435: inode 1849777
>>> [95714.207432] EXT4-fs (ploop60706p1): error count since last fsck: 42
>>> [95714.207447] EXT4-fs (ploop60706p1): initial error at time 1593210489:
>>> ext4_ext_find_extent:904: inode 136272
>>> [95714.207452] EXT4-fs (ploop60706p1): last error at time 1593231063:
>>> ext4_ext_find_extent:904: inode 136272
>>>
>>> Shutting the containers down and manually mounting and e2fsck'ing their
>>> filesystems did clear these errors, but each of the containers (which were
>>> mostly used for running Plesk) had widespread issues with corrupt or
>>> missing files after the fsck's completed, necessitating their being
>>> restored from backup.
>>>
>>> Concurrently, we also began to see messages like this appearing in
>>> /var/log/vzctl.log, which again have never appeared at any point prior to
>>> this update being installed:
>>>
>>> /var/log/vzctl.log:2020-06-26T21:05:19+0100 : Error in fill_hole
>>> (check.c:240): Warning: ploop image '/vz/private/8288448/root.hdd/root.hds'
>>> is sparse
>>> /var/l

Re: [Users] Issues after updating to 7.0.14 (136)

2020-07-02 Thread jjs - mainphrame
Jehan -

I get the same output here -

[root@annie ~]# yum repolist  |grep virt
virtuozzolinux-baseVirtuozzoLinux Base
 15,415+189
virtuozzolinux-updates VirtuozzoLinux Updates
   0

I'm baffled as to how you're on 7.8.0 while I'm at 7.0.15 even though I'm
fully up to date.

# uname -a
Linux annie.ufcfan.org 3.10.0-1127.8.2.vz7.151.10 #1 SMP Mon Jun 1 19:05:52
MSK 2020 x86_64 x86_64 x86_64 GNU/Linux

Jake

On Thu, Jul 2, 2020 at 10:08 AM Jehan PROCACCIA <
jehan.procac...@imtbs-tsp.eu> wrote:

> no factory , just repos virtuozzolinux-base and openvz-os
>
> # yum repolist  |grep virt
> virtuozzolinux-baseVirtuozzoLinux Base15
> 415+189
> virtuozzolinux-updates VirtuozzoLinux
> Updates  0
>
> Jehan .
>
> ----------
> *De: *"jjs - mainphrame" 
> *À: *"OpenVZ users" 
> *Cc: *"Kevin Drysdale" 
> *Envoyé: *Jeudi 2 Juillet 2020 18:22:33
> *Objet: *Re: [Users] Issues after updating to 7.0.14 (136)
>
> Jehan, are you running factory?
>
> My ovz hosts are up to date, and I see:
>
> [root@annie ~]# cat /etc/virtuozzo-release
> OpenVZ release 7.0.15 (222)
>
> Jake
>
>
> On Thu, Jul 2, 2020 at 9:08 AM Jehan Procaccia IMT <
> jehan.procac...@imtbs-tsp.eu> wrote:
>
>> "updating to 7.0.14 (136)" !?
>>
>> I did an update yesterday; I am far beyond that version
>>
>> *# cat /etc/vzlinux-release*
>> *Virtuozzo Linux release 7.8.0 (609)*
>>
>> *# uname -a *
>> *Linux localhost 3.10.0-1127.8.2.vz7.151.14 #1 SMP Tue Jun 9 12:58:54 MSK
>> 2020 x86_64 x86_64 x86_64 GNU/Linux*
>>
>> why don't you try updating to the latest version?
>>
>>
>> Le 29/06/2020 à 12:30, Kevin Drysdale a écrit :
>>
>> Hello,
>>
>> After updating one of our OpenVZ VPS hosting nodes at the end of last
>> week, we've started to have issues with corruption apparently occurring
>> inside containers.  Issues of this nature have never affected the node
>> previously, and there do not appear to be any hardware issues that could
>> explain this.
>>
>> Specifically, a few hours after updating, we began to see containers
>> experiencing errors such as this in the logs:
>>
>> [90471.678994] EXT4-fs (ploop35454p1): error count since last fsck: 25
>> [90471.679022] EXT4-fs (ploop35454p1): initial error at time 1593205255:
>> ext4_ext_find_extent:904: inode 136399
>> [90471.679030] EXT4-fs (ploop35454p1): last error at time 1593232922:
>> ext4_ext_find_extent:904: inode 136399
>> [95189.954569] EXT4-fs (ploop42983p1): error count since last fsck: 67
>> [95189.954582] EXT4-fs (ploop42983p1): initial error at time 1593210174:
>> htree_dirblock_to_tree:918: inode 926441: block 3683060
>> [95189.954589] EXT4-fs (ploop42983p1): last error at time 1593276902:
>> ext4_iget:4435: inode 1849777
>> [95714.207432] EXT4-fs (ploop60706p1): error count since last fsck: 42
>> [95714.207447] EXT4-fs (ploop60706p1): initial error at time 1593210489:
>> ext4_ext_find_extent:904: inode 136272
>> [95714.207452] EXT4-fs (ploop60706p1): last error at time 1593231063:
>> ext4_ext_find_extent:904: inode 136272
>>
>> Shutting the containers down and manually mounting and e2fsck'ing their
>> filesystems did clear these errors, but each of the containers (which were
>> mostly used for running Plesk) had widespread issues with corrupt or
>> missing files after the fsck's completed, necessitating their being
>> restored from backup.
>>
>> Concurrently, we also began to see messages like this appearing in
>> /var/log/vzctl.log, which again have never appeared at any point prior to
>> this update being installed:
>>
>> /var/log/vzctl.log:2020-06-26T21:05:19+0100 : Error in fill_hole
>> (check.c:240): Warning: ploop image '/vz/private/8288448/root.hdd/root.hds'
>> is sparse
>> /var/log/vzctl.log:2020-06-26T21:09:41+0100 : Error in fill_hole
>> (check.c:240): Warning: ploop image '/vz/private/8288450/root.hdd/root.hds'
>> is sparse
>> /var/log/vzctl.log:2020-06-26T21:16:22+0100 : Error in fill_hole
>> (check.c:240): Warning: ploop image '/vz/private/8288451/root.hdd/root.hds'
>> is sparse
>> /var/log/vzctl.log:2020-06-26T21:19:57+0100 : Error in fill_hole
>> (check.c:240): Warning: ploop image '/vz/private/8288452/root.hdd/root.hds'
>> is sparse
>>
>> The basic procedure we follow when updating our nodes is as follows:
>>
> >> 1. Update the standby node we keep spare for this process
>> 2. vzmigrate all containers from the live node being updated to the
> >> standby node

Re: [Users] Issues after updating to 7.0.14 (136)

2020-07-02 Thread jjs - mainphrame
Jehan, are you running factory?

My ovz hosts are up to date, and I see:

[root@annie ~]# cat /etc/virtuozzo-release
OpenVZ release 7.0.15 (222)

Jake


On Thu, Jul 2, 2020 at 9:08 AM Jehan Procaccia IMT <
jehan.procac...@imtbs-tsp.eu> wrote:

> "updating to 7.0.14 (136)" !?
>
> I did an update yesterday; I am far beyond that version
>
> *# cat /etc/vzlinux-release*
> *Virtuozzo Linux release 7.8.0 (609)*
>
> *# uname -a *
> *Linux localhost 3.10.0-1127.8.2.vz7.151.14 #1 SMP Tue Jun 9 12:58:54 MSK
> 2020 x86_64 x86_64 x86_64 GNU/Linux*
>
> why don't you try updating to the latest version?
>
>
> Le 29/06/2020 à 12:30, Kevin Drysdale a écrit :
>
> Hello,
>
> After updating one of our OpenVZ VPS hosting nodes at the end of last
> week, we've started to have issues with corruption apparently occurring
> inside containers.  Issues of this nature have never affected the node
> previously, and there do not appear to be any hardware issues that could
> explain this.
>
> Specifically, a few hours after updating, we began to see containers
> experiencing errors such as this in the logs:
>
> [90471.678994] EXT4-fs (ploop35454p1): error count since last fsck: 25
> [90471.679022] EXT4-fs (ploop35454p1): initial error at time 1593205255:
> ext4_ext_find_extent:904: inode 136399
> [90471.679030] EXT4-fs (ploop35454p1): last error at time 1593232922:
> ext4_ext_find_extent:904: inode 136399
> [95189.954569] EXT4-fs (ploop42983p1): error count since last fsck: 67
> [95189.954582] EXT4-fs (ploop42983p1): initial error at time 1593210174:
> htree_dirblock_to_tree:918: inode 926441: block 3683060
> [95189.954589] EXT4-fs (ploop42983p1): last error at time 1593276902:
> ext4_iget:4435: inode 1849777
> [95714.207432] EXT4-fs (ploop60706p1): error count since last fsck: 42
> [95714.207447] EXT4-fs (ploop60706p1): initial error at time 1593210489:
> ext4_ext_find_extent:904: inode 136272
> [95714.207452] EXT4-fs (ploop60706p1): last error at time 1593231063:
> ext4_ext_find_extent:904: inode 136272
>
> Shutting the containers down and manually mounting and e2fsck'ing their
> filesystems did clear these errors, but each of the containers (which were
> mostly used for running Plesk) had widespread issues with corrupt or
> missing files after the fsck's completed, necessitating their being
> restored from backup.
>
> Concurrently, we also began to see messages like this appearing in
> /var/log/vzctl.log, which again have never appeared at any point prior to
> this update being installed:
>
> /var/log/vzctl.log:2020-06-26T21:05:19+0100 : Error in fill_hole
> (check.c:240): Warning: ploop image '/vz/private/8288448/root.hdd/root.hds'
> is sparse
> /var/log/vzctl.log:2020-06-26T21:09:41+0100 : Error in fill_hole
> (check.c:240): Warning: ploop image '/vz/private/8288450/root.hdd/root.hds'
> is sparse
> /var/log/vzctl.log:2020-06-26T21:16:22+0100 : Error in fill_hole
> (check.c:240): Warning: ploop image '/vz/private/8288451/root.hdd/root.hds'
> is sparse
> /var/log/vzctl.log:2020-06-26T21:19:57+0100 : Error in fill_hole
> (check.c:240): Warning: ploop image '/vz/private/8288452/root.hdd/root.hds'
> is sparse
>
> The basic procedure we follow when updating our nodes is as follows:
>
> 1. Update the standby node we keep spare for this process
> 2. vzmigrate all containers from the live node being updated to the
> standby node
> 3. Update the live node
> 4. Reboot the live node
> 5. vzmigrate the containers from the standby node back to the live node
> they originally came from
>
> So the only tool which has been used to affect these containers is
> 'vzmigrate' itself, so I'm at something of a loss as to how to explain the
> root.hdd images for these containers containing sparse gaps.  This is
> something we have never done, as we have always been aware that OpenVZ does
> not support their use inside a container's hard drive image.  And the fact
> that these images have suddenly become sparse at the same time they have
> started to exhibit filesystem corruption is somewhat concerning.
>
> We can restore all affected containers from backups, but I wanted to get
> in touch with the list to see if anyone else at any other site has
> experienced these or similar issues after applying the 7.0.14 (136) update.
>
> Thank you,
> Kevin Drysdale.
>
>
>
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] vzlinux 8

2020-05-04 Thread jjs - mainphrame
Looks like there are some rough edges. Will be watching for updates.

<...installing a package...>
warning:
/var/cache/dnf/virtuozzolinux-base-b1ad5fe4dfeb5b55/packages/libicu-60.3-2.vl8.x86_64.rpm:
Header V4 RSA/SHA1 Signature, key ID 1812f4d9: NOKEY
VirtuozzoLinux Base 471 kB/s | 2.0 kB
00:00
Importing GPG key 0x1812F4D9:
 Userid : "Virtuozzo Linux "
 Fingerprint: E1D0 8ACC 8DCE F9A3 3E93 086F 458D 0BA0 1812 F4D9
 From   : /etc/pki/rpm-gpg/VZLINUX_GPG_KEY
Is this ok [y/N]: y
Key imported successfully
VirtuozzoLinux Base 0.0  B/s |   0  B
00:00
Curl error (37): Couldn't read a file:// file for
file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-7 [Couldn't open file
/etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-7]
The downloaded packages were saved in cache until the next successful
transaction.
You can remove cached packages by executing 'dnf clean packages'.
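
A possible workaround - an untested guess on my part: the repo config asks
for a Virtuozzo-7 key file the template doesn't ship, so placing the key
that does exist at the expected path may satisfy dnf (only sensible if the
packages really are signed with the VzLinux key):

cp /etc/pki/rpm-gpg/VZLINUX_GPG_KEY /etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-7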


On Mon, May 4, 2020 at 10:32 AM jjs - mainphrame  wrote:

> I noticed there's a vzlinux 8 template available, so I downloaded it to
> have a look.
>
> That got me to thinking, since vzlinux 8 is a thing, could openvz 8 be
> close?
>
> Always ready and willing to test.
>
> Regards,
>
> jake
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] vzlinux 8

2020-05-04 Thread jjs - mainphrame
I noticed there's a vzlinux 8 template available, so I downloaded it to
have a look.

That got me to thinking, since vzlinux 8 is a thing, could openvz 8 be
close?

Always ready and willing to test.

Regards,

jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Forum account registration

2020-04-06 Thread jjs - mainphrame
Greetings admins and developers,

I would also like to request the creation of a forum account, per the
directive at
https://forum.openvz.org/index.php?t=msg=13585=0=ad74870617ba3e39065574a1252467c8

Thank you for your kind assistance.

J J sloan



On Mon, Apr 6, 2020 at 9:07 PM Vasily Averin  wrote:

> On 4/6/20 5:35 PM, Paulo Coghi - Coghi IT wrote:
> > Hello OpenVZ community,
> >
> > As per instructions here:
> >
> https://forum.openvz.org/index.php?t=msg=13585=0=ad74870617ba3e39065574a1252467c8
> >
> > I would like to ask the creation of an account.
>
> done
>
> Thank you,
> Vasily Averin
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] centos 7

2020-04-01 Thread jjs - mainphrame
It used to work, that's how I set up my openvz server at first. That was
the standard method: install centos, then install openvz.

But now, they have created a specialized distro called vzlinux. It's based
on centos 7, and looks and feels just like centos 7, but has the container
and virtualization hosting built in.

You have to install openvz linux now, if you want a new openvz server.

https://openvz.org/

Jake



On Wed, Apr 1, 2020 at 11:26 AM mattias  wrote:

> is this guide really working?
>
>
> https://devopspoints.com/centos-7-setting-up-openvz-virtualization-on-centos-7.html
>
> a little too many dependency problems with yum
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Openvz 8

2020-03-30 Thread jjs - mainphrame
Thanks, understood. I can tell you that it's highly anticipated.

Jake

On Mon, Mar 30, 2020 at 2:23 AM Vasily Averin  wrote:

> Dear Jake,
> thank you for your proposal,
> however we are not ready yet for wide OpenVz 8 testing, it is still under
> development.
>
> thank you,
> Vasily Averin
>
> On 3/29/20 11:47 PM, jjs - mainphrame wrote:
> > Bump -
> >
> > Still ready and willing to test & report
> >
> > jake
> >
> > On Tue, Oct 22, 2019 at 2:51 PM jjs - mainphrame wrote:
> >
> > Guys,
> >
> > If you ever want beta testers for OVZ 8, I just want you to know
> that I'm here for you.
> >
> > Thanks,
> >
> > Jake
> >
> >
> > ___
> > Users mailing list
> > Users@openvz.org
> > https://lists.openvz.org/mailman/listinfo/users
> >
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Openvz 8

2020-03-29 Thread jjs - mainphrame
Bump -

Still ready and willing to test & report

jake

On Tue, Oct 22, 2019 at 2:51 PM jjs - mainphrame  wrote:

> Guys,
>
> If you ever want beta testers for OVZ 8, I just want you to know that I'm
> here for you.
>
> Thanks,
>
> Jake
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] Openvz 8

2019-10-22 Thread jjs - mainphrame
Guys,

If you ever want beta testers for OVZ 8, I just want you to know that I'm
here for you.

Thanks,

Jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Creating a VM

2019-07-02 Thread jjs - mainphrame
I'm very pleased with the performance of openvz VMs - just a quick test
spinning up identically spec'd centos 7 VMs under kvm, virtualbox and
openvz, and openvz was outperforming the others by a definitive margin.
Kudos to the developers.

Jake

On Tue, Jul 2, 2019 at 2:49 PM jjs - mainphrame  wrote:

> Yes, that's it - thanks for the quick response.
>
> Jake
>
> On Tue, Jul 2, 2019 at 2:42 PM Arjit Chaudhary  wrote:
>
>> I think you would need to use,
>>
>> --device-add cdrom {--device <name> | --image <path>}
>> [--iface <iface>] [--subtype <subtype>]
>> [--passthr] [--position <position>]
>>
>> So I think to mount the ISO image
>>
>> prlctl set c7-vm1 --device-add cdrom --image /path/to/iso
>>
>> If you have to specify "--device <device>" as well, then I'd suggest
>> checking the cd-rom device via:
>>  virsh edit c7-vm1
>>
>> could be hda or hdc
>>
>> On Wed, Jul 3, 2019 at 2:18 AM jjs - mainphrame 
>> wrote:
>>
>>> I've long been using openvz for running containers. Now I'm looking into
>>> running VMs.
>>>
>>> I've looked through the docs and can't find a description of how to
>>> install an OS into a VM, once created.
>>>
>>> I created a centos 7 vm with:
>>> # prlctl create c7-vm1 --distribution centos7 --vmtype vm
>>>
>>> I set the remote access with:
>>> # prlctl set c7-vm1 --vnc-mode manual --vnc-port 59000 --vnc-passwd
>>> xx
>>>
>>> But I don't see how to launch an installer, as I would in e.g. kvm. I
>>> can see the vm booting up through the vnc connection, and it apparently
>>> can't find the iso image sitting in the vmprivate/vm-id directory.
>>>
>>> I'd like to buy a clue, please.
>>>
>>> jake
>>> ___
>>> Users mailing list
>>> Users@openvz.org
>>> https://lists.openvz.org/mailman/listinfo/users
>>>
>>
>>
>> --
>> Thanks,
>> Arjit Chaudhary
>> ___
>> Users mailing list
>> Users@openvz.org
>> https://lists.openvz.org/mailman/listinfo/users
>>
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Creating a VM

2019-07-02 Thread jjs - mainphrame
Yes, that's it - thanks for the quick response.

Jake

On Tue, Jul 2, 2019 at 2:42 PM Arjit Chaudhary  wrote:

> I think you would need to use,
>
> --device-add cdrom {--device <name> | --image <path>}
> [--iface <iface>] [--subtype <subtype>]
> [--passthr] [--position <position>]
>
> So I think to mount the ISO image
>
> prlctl set c7-vm1 --device-add cdrom --image /path/to/iso
>
> If you have to specify "--device <device>" as well, then I'd suggest checking
> the cd-rom device via:
>  virsh edit c7-vm1
>
> could be hda or hdc
>
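> Putting the thread together, the whole flow would be something like this
> (the ISO path and VNC password are placeholders):
>
> prlctl create c7-vm1 --distribution centos7 --vmtype vm
> prlctl set c7-vm1 --device-add cdrom --image /path/to/centos7.iso
> prlctl set c7-vm1 --vnc-mode manual --vnc-port 59000 --vnc-passwd xxxx
> prlctl start c7-vm1   # then point a VNC client at host:59000 for the installer
>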
> On Wed, Jul 3, 2019 at 2:18 AM jjs - mainphrame 
> wrote:
>
>> I've long been using openvz for running containers. Now I'm looking into
>> running VMs.
>>
>> I've looked through the docs and can't find a description of how to
>> install an OS into a VM, once created.
>>
>> I created a centos 7 vm with:
>> # prlctl create c7-vm1 --distribution centos7 --vmtype vm
>>
>> I set the remote access with:
>> # prlctl set c7-vm1 --vnc-mode manual --vnc-port 59000 --vnc-passwd xx
>>
>> But I don't see how to launch an installer, as I would in e.g. kvm. I can
>> see the vm booting up through the vnc connection, and it apparently can't
>> find the iso image sitting in the vmprivate/vm-id directory.
>>
>> I'd like to buy a clue, please.
>>
>> jake
>> ___
>> Users mailing list
>> Users@openvz.org
>> https://lists.openvz.org/mailman/listinfo/users
>>
>
>
> --
> Thanks,
> Arjit Chaudhary
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] Creating a VM

2019-07-02 Thread jjs - mainphrame
I've long been using openvz for running containers. Now I'm looking into
running VMs.

I've looked through the docs and can't find a description of how to install
an OS into a VM, once created.

I created a centos 7 vm with:
# prlctl create c7-vm1 --distribution centos7 --vmtype vm

I set the remote access with:
# prlctl set c7-vm1 --vnc-mode manual --vnc-port 59000 --vnc-passwd xx

But I don't see how to launch an installer, as I would in e.g. kvm. I can
see the vm booting up through the vnc connection, and it apparently can't
find the iso image sitting in the vmprivate/vm-id directory.

I'd like to buy a clue, please.

jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] OpenVZ/Virtuozzo 7 Repos Broken

2019-06-23 Thread jjs - mainphrame
Seems to be fixed this morning -

Jake

On Sat, Jun 22, 2019 at 8:24 AM Jonathan Wright 
wrote:

> The repos are still dead.  It would be great if someone from VZ could
> weigh in and let us know what's going on and when to expect a fix.
> On 6/21/19 3:43 PM, Jehan Procaccia wrote:
>
> I confirm, I cannot yum update on mine
>
> # cat /etc/openvz-release
> OpenVZ release 7.0.10 (254)
> # uname -a
> Linux olbia.int-evry.fr 3.10.0-957.12.2.vz7.86.2 #1 SMP Wed May 15
> 09:45:34 MSK 2019 x86_64 x86_64 x86_64 GNU/Linux
>
>
> # yum update
> Loaded plugins: fastestmirror, langpacks, openvz,
> priorities, product-id, refresh-packagekit, rhsm-auto-add-pools,
> : search-disabled-repos, vzlinux
> Loading mirror speeds from cached hostfile
>
>
>  One of the configured repositories failed (Unknown),
>  and yum doesn't have enough cached data to continue. At this point the
> only
>  safe thing yum can do is fail. There are a few ways to work "fix" this:
>
>  1. Contact the upstream for the repository and get them to fix the
> problem.
>
>  2. Reconfigure the baseurl/etc. for the repository, to point to a
> working
> upstream. This is most often useful if you are using a newer
> distribution release than is supported by the repository (and the
> packages for the previous distribution release still work).
>
>  3. Run the command with the repository temporarily disabled
> yum --disablerepo= ...
>
>  4. Disable the repository permanently, so yum won't use it by
> default. Yum
> will then just ignore the repository until you permanently enable
> it
> again or use --enablerepo for temporary usage:
>
> yum-config-manager --disable 
> or
> subscription-manager repos --disable=
>
>  5. Configure the failing repository to be skipped, if it is
> unavailable.
> Note that yum will try to contact the repo. when it runs most
> commands,
> so will have to try and fail each time (and thus. yum will be be
> much
> slower). If it is a very temporary problem though, this is often a
> nice
> compromise:
>
> yum-config-manager --save
> --setopt=.skip_if_unavailable=true
>
> Cannot find a valid baseurl for repo: virtuozzolinux-base
>
> # yum repolist
> Loaded plugins: fastestmirror, langpacks, openvz,
> priorities, product-id, refresh-packagekit, rhsm-auto-add-pools,
> : search-disabled-repos, vzlinux
> Loading mirror speeds from cached hostfile
> Loading mirror speeds from cached hostfile
> Loading mirror speeds from cached hostfile
> Loading mirror speeds from cached hostfile
> Loading mirror speeds from cached hostfile
> Loading mirror speeds from cached hostfile
> Loading mirror speeds from cached hostfile
> Loading mirror speeds from cached hostfile
> repo id                     repo name                          status
> openvz-os
> OpenVZ0
> openvz-updates  OpenVZ
> Updates0
> virtuozzolinux-base VirtuozzoLinux
> Base   0
> virtuozzolinux-updates  VirtuozzoLinux
> Updates0
> repolist: 0
>
>
> On 21/06/2019 at 20:07, jjs - mainphrame wrote:
>
> Seeing the same here. I suspect it may be related to the package signing
> issue I saw yesterday. I expect it will be cleared up before too long.
>
> Jake
>
> On Fri, Jun 21, 2019 at 9:22 AM Jonathan Wright 
> wrote:
>
>> Something has broken the vz7 repos:
>>
>> # yum upgrade
>> Loaded plugins: fastestmirror, langpacks, openvz, priorities, vzlinux
>> Determining fastest mirrors
>>   * openvz-os: mirrors.evowise.com
>>   * openvz-updates: mirrors.evowise.com
>> openvz-os | 3.9 kB  00:00:00
>> openvz-updates | 3.1 kB  00:00:00
>> virtuozzolinux-base |  785 B  00:00:00
>> virtuozzolinux-updates | 2.9 kB  00:00:00
>> (1/4): virtuozzolinux-updates/primary_db | 1.1 kB  00:00:00
>> (2/4): openvz-os/group_gz |  18 kB  00:00:00
>> (3/4): openvz-updates/primary_db | 882 kB  00:00:00
>> (4/4): openvz-os/primary_db | 987 kB  00:00:00
>> Error: requested datatype primary not available
>>
>> Seeing this across all of my ovz/vz7 servers.
>>
>> --
>> Jonathan Wright

Re: [Users] OpenVZ/Virtuozzo 7 Repos Broken

2019-06-21 Thread jjs - mainphrame
Seeing the same here. I suspect it may be related to the package signing
issue I saw yesterday. I expect it will be cleared up before too long.

Jake

On Fri, Jun 21, 2019 at 9:22 AM Jonathan Wright 
wrote:

> Something has broken the vz7 repos:
>
> # yum upgrade
> Loaded plugins: fastestmirror, langpacks, openvz, priorities, vzlinux
> Determining fastest mirrors
>   * openvz-os: mirrors.evowise.com
>   * openvz-updates: mirrors.evowise.com
> openvz-os | 3.9 kB  00:00:00
> openvz-updates | 3.1 kB  00:00:00
> virtuozzolinux-base |  785 B  00:00:00
> virtuozzolinux-updates | 2.9 kB  00:00:00
> (1/4): virtuozzolinux-updates/primary_db | 1.1 kB  00:00:00
> (2/4): openvz-os/group_gz |  18 kB  00:00:00
> (3/4): openvz-updates/primary_db | 882 kB  00:00:00
> (4/4): openvz-os/primary_db | 987 kB  00:00:00
> Error: requested datatype primary not available
>
> Seeing this across all of my ovz/vz7 servers.
>
> --
> Jonathan Wright
> KnownHost, LLC
> https://www.knownhost.com
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] New and interesting problem with vzlinux CT

2019-06-21 Thread jjs - mainphrame
Thanks guys -

Jake

On Fri, Jun 21, 2019 at 5:52 AM Denis Silakov 
wrote:

> ... re-signed package has been sent to the repos, should appear on
> repo.virtuozzo.com CDN in 1-2 hours.
>
>
> On 06/21/2019 01:54 PM, Denis Silakov wrote:
> > This package is indeed signed by the Vz7 key, not the VzLinux7 one. Will fix
> soon.
> >
> >
> > On 06/21/2019 09:10 AM, Vasily Averin wrote:
> >> Dear Jake,
> >> thank you for reporting the problem,
> >>
> >> It looks strange for me too,
> >> I do not see any difference between
> readykernel-scan-0.9-1.vl7.noarch.rpm and any other packages.
> >> I can only assume the package was probably somehow corrupted during
> >> transmission.
> >>
> >> Could you please re-check the signature of the downloaded package?
> >>
> >> # rpmkeys -v -K readykernel-scan-0.9-1.vl7.noarch.rpm
> >> readykernel-scan-0.9-1.vl7.noarch.rpm:
> >>   Header V4 RSA/SHA1 Signature, key ID 44cdad2a: OK
> >>   Header SHA1 digest: OK (8ec92a2583e1f9113c192862f35cf47409b44fb5)
> >>   V4 RSA/SHA1 Signature, key ID 44cdad2a: OK
> >>   MD5 digest: OK (35e1bda4562bb30c5c6b6f97678440b6)
> >> # rpmkeys -v -K readykernel-scan-0.8-1.vl7.noarch.rpm
> >> readykernel-scan-0.8-1.vl7.noarch.rpm:
> >>   Header V4 RSA/SHA1 Signature, key ID 44cdad2a: OK
> >>   Header SHA1 digest: OK (c88b5c3ade303cf941f6589b5fba555bbb89fafc)
> >>   V4 RSA/SHA1 Signature, key ID 44cdad2a: OK
> >>   MD5 digest: OK (8c930bb8a80c80b2135bdabf3cec1d20)
> >>
> >> Thank you,
> >>  Vasily Averin
> >>
> >>
> >> On 6/20/19 9:10 PM, jjs - mainphrame wrote:
> >>> I was doing a yum update on a vzlinux CT, and at the end, I saw the
> >>> output below. Very odd, since this key is exactly the same one as on
> the host and on other vzlinux CTs.
> >>>
> >>> <... many lines snipped...>
> >>> (187/188): xz-5.2.2-1.vl7.x86_64.rpm | 228 kB
>  00:00:01
> >>> (188/188): zstd-1.4.0-1.vl7.x86_64.rpm   | 358 kB
>  00:00:00
> >>>
> 
> >>> Total  2.1 MB/s | 104
> MB  00:49
> >>> warning:
> /var/cache/yum/x86_64/7/virtuozzolinux-base/packages/readykernel-scan-0.9-1.vl7.noarch.rpm:
> Header V4 RSA/SHA1 Signature, key ID 44cdad2a: NOKEY
> >>> Retrieving key from file:///etc/pki/rpm-gpg/VZLINUX_GPG_KEY
> >>>
> >>>
> >>> The GPG keys listed for the "VirtuozzoLinux-7 - Base" repository are
> already installed but they are not correct for this package.
> >>> Check that the correct key URLs are configured for this repository.
> >>>
> >>>
> >>>Failing package is: readykernel-scan-0.9-1.vl7.noarch
> >>>GPG Keys are configured as: file:///etc/pki/rpm-gpg/VZLINUX_GPG_KEY
> >>>
> >>> CT-1548 #
> >>>
> >>> Jake
> >>>
> >>> ___
> >>> Users mailing list
> >>> Users@openvz.org
> >>> https://lists.openvz.org/mailman/listinfo/users
> >>>
> >> ___
> >> Users mailing list
> >> Users@openvz.org
> >> https://lists.openvz.org/mailman/listinfo/users
>
> --
> Regards,
>
> Denis Silakov | Sr. Software Architect, Virtuozzo Linux Team Lead
> Otradnaya street 2B/9, “Otradnoye” Business Center | Moscow | Russia
> Phone: +7 916-222-9437 | dsila...@virtuozzo.com
> Skype: denis.silakov
>
> Virtuozzo.com
>
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] New and interesting problem with vzlinux CT

2019-06-20 Thread jjs - mainphrame
I was doing a yum update on a vzlinux CT, and at the end, I saw the output
below. Very odd, since this key is exactly the same one as on the host and
on other vzlinux CTs.

<... many lines snipped...>
(187/188): xz-5.2.2-1.vl7.x86_64.rpm | 228 kB
 00:00:01
(188/188): zstd-1.4.0-1.vl7.x86_64.rpm   | 358 kB
 00:00:00

Total  2.1 MB/s | 104 MB
 00:49
warning:
/var/cache/yum/x86_64/7/virtuozzolinux-base/packages/readykernel-scan-0.9-1.vl7.noarch.rpm:
Header V4 RSA/SHA1 Signature, key ID 44cdad2a: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/VZLINUX_GPG_KEY


The GPG keys listed for the "VirtuozzoLinux-7 - Base" repository are
already installed but they are not correct for this package.
Check that the correct key URLs are configured for this repository.


 Failing package is: readykernel-scan-0.9-1.vl7.noarch
 GPG Keys are configured as: file:///etc/pki/rpm-gpg/VZLINUX_GPG_KEY

CT-1548 #

Jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Persistent failure of live migration

2019-06-17 Thread jjs - mainphrame
Thank you for the follow-up. It seems the requirements for live migration
have been tightened considerably.

We used to live migrate between different CPU types, but now both sides
of a live migration must be exactly identical. Almost makes one want to go
back to the openvz 6 days.

I'll work on finding an exact clone of my primary OVZ server and try again.

Thanks for the prompt response.

Jake

On Sun, Jun 16, 2019 at 4:38 AM Konstantin Khorenko 
wrote:

> On 06/15/2019 08:21 PM, jjs - mainphrame wrote:
> > Greetings -
> >
> > Live migration, which worked beautifully with openvz 10 years ago, has
> stopped working in the current openvz 7 environment.
> >
> > When I first built ovz 7 servers a few years ago, live migration worked
> as it should. Within the past few months it stopped working. Thinking it
> might be a problem with tighter
> > requirements on CPU matching, I repurposed the amd based ovz host and
> replaced it, so that both ovz hosts would be running on intel hardware.
> >
> > Unfortunately that did not change the issue. Note below - both hosts are
> running up-to-date ovz-7 non-factory.
> >
> > The command -
> > [root@annie ~]# vzmigrate --online --nodeps hachi 1987
> > Connection to destination node (hachi) is successfully established
> > Moving/copying CT 1987 -> CT 1987, [], [] ...
> > locking 1987
> > Checking bindmounts
> > Check cluster ID
> > Checking keep dir for private area copy
> > Checking technologies
> > Checking IP addresses on destination node
> > Checking RATE parameters in config
> > Checking ploop format 2
> > copy CT private /vz/private/1987
> > Live migration stage started
> > Compression is enabled
> > Phaul service failed to live migrate CT
> > Phaul failed to live migrate CT (/var/log/phaul.log)
> > Can't move/copy CT 1987 -> CT 1987, [], [] : Phaul failed to live
> migrate CT (/var/log/phaul.log)
> > unlocking 1987
> > [root@annie ~]#
> >
> > Contents of phaul.log -
> > --
> > 10:01:16.569: 17149:
> > 10:01:16.569: 17149:
> > 10:01:16.570: 17149:
> > 10:01:16.570: 17149: Starting p.haul
> > 10:01:16.570: 17149: Use existing connections, fdrpc=11 fdmem=13
> fdfs=root.hdd/root.hds:15
> > 10:01:16.589: 17149: Setting up local
> > 10:01:16.589: 17149: Loading config file from /etc/vz/conf/
> > 10:01:16.590: 17149: Initialize ploop hauler
> > 10:01:16.590: 17149: `- /vz/private/1987/root.hdd/root.hds
> > 10:01:16.616: 17149: Passing (ctl:12, data:10) pair to CRIU
> > 10:01:16.616: 17149: Set maximum number of open file descriptors to
> 1048576
> > 10:01:16.618: 17149: Setting up remote
> > 10:01:16.704: 17149: Start migration in live mode
> > 10:01:16.704: 17149: Checking criu version
> > 10:01:16.757: 17149: Checking for Dirty Tracking
> > 10:01:16.758: 17149: `- Explicitly enabled
> > 10:01:16.758: 17149: Preliminary FS migration
> > 10:01:34.968: 17149: Fs driver transfer 1327497216 bytes (~1266Mb)
> > 10:01:34.968: 17149: * Iteration 0
> > 10:01:35.074: 17149: Making directory
> /vz/dump/1987/dmp-VTSUxn-19.06.15-10.01/img/1
> > 10:01:35.075: 17149: Issuing pre-dump command to service
> > 10:01:36.120: 17149: Dumped 28561 pages, 0 skipped
> > 10:01:36.120: 17149: Fs driver transfer 0 bytes
> > 10:01:36.120: 17149: Checking iteration progress:
> > 10:01:36.120: 17149: > Proceed to next iteration
> > 10:01:36.120: 17149: * Iteration 1
> > 10:01:36.122: 17149: Making directory
> /vz/dump/1987/dmp-VTSUxn-19.06.15-10.01/img/2
> > 10:01:36.122: 17149: Issuing pre-dump command to service
> > 10:01:37.751: 17149: Dumped 360 pages, 28201 skipped
> > 10:01:37.751: 17149: Fs driver transfer 0 bytes
> > 10:01:37.751: 17149: Checking iteration progress:
> > 10:01:37.751: 17149: > Proceed to next iteration
> > 10:01:37.751: 17149: * Iteration 2
> > 10:01:37.754: 17149: Making directory
> /vz/dump/1987/dmp-VTSUxn-19.06.15-10.01/img/3
> > 10:01:37.754: 17149: Issuing pre-dump command to service
> > 10:01:38.485: 17149: Dumped 361 pages, 28200 skipped
> > 10:01:38.485: 17149: Fs driver transfer 0 bytes
> > 10:01:38.485: 17149: Checking iteration progress:
> > 10:01:38.485: 17149: > Too many iterations
> > 10:01:38.485: 17149: Final dump and restore
> > 10:01:38.487: 17149: Making directory
> /vz/dump/1987/dmp-VTSUxn-19.06.15-10.01/img/4
> > 10:01:38.545: 17149: Issuing dump command to service
> > 10:01:38.547: 17149: Notify (pre-dump)
> > 10:01:38.555: 17149: Notify (network-lock)
> > 10:01:38.579: 17149: Action script
> /usr/libexec/criu/scripts/nfs-ports-allow.sh fini

[Users] Persistent failure of live migration

2019-06-15 Thread jjs - mainphrame
Greetings -

Live migration, which worked beautifully with openvz 10 years ago, has
stopped working in the current openvz 7 environment.

When I first built ovz 7 servers a few years ago, live migration worked as
it should. Within the past few months it stopped working. Thinking it might
be a problem with tighter requirements on CPU matching, I repurposed the
amd based ovz host and replaced it, so that both ovz hosts would be running
on intel hardware.

Unfortunately that did not change the issue. Note below - both hosts are
running up-to-date ovz-7 non-factory.

The command -
[root@annie ~]# vzmigrate --online --nodeps hachi 1987
Connection to destination node (hachi) is successfully established
Moving/copying CT 1987 -> CT 1987, [], [] ...
locking 1987
Checking bindmounts
Check cluster ID
Checking keep dir for private area copy
Checking technologies
Checking IP addresses on destination node
Checking RATE parameters in config
Checking ploop format 2
copy CT private /vz/private/1987
Live migration stage started
Compression is enabled
Phaul service failed to live migrate CT
Phaul failed to live migrate CT (/var/log/phaul.log)
Can't move/copy CT 1987 -> CT 1987, [], [] : Phaul failed to live migrate
CT (/var/log/phaul.log)
unlocking 1987
[root@annie ~]#

Contents of phaul.log -
--
10:01:16.569: 17149:
10:01:16.569: 17149:
10:01:16.570: 17149:
10:01:16.570: 17149: Starting p.haul
10:01:16.570: 17149: Use existing connections, fdrpc=11 fdmem=13
fdfs=root.hdd/root.hds:15
10:01:16.589: 17149: Setting up local
10:01:16.589: 17149: Loading config file from /etc/vz/conf/
10:01:16.590: 17149: Initialize ploop hauler
10:01:16.590: 17149: `- /vz/private/1987/root.hdd/root.hds
10:01:16.616: 17149: Passing (ctl:12, data:10) pair to CRIU
10:01:16.616: 17149: Set maximum number of open file descriptors to 1048576
10:01:16.618: 17149: Setting up remote
10:01:16.704: 17149: Start migration in live mode
10:01:16.704: 17149: Checking criu version
10:01:16.757: 17149: Checking for Dirty Tracking
10:01:16.758: 17149: `- Explicitly enabled
10:01:16.758: 17149: Preliminary FS migration
10:01:34.968: 17149: Fs driver transfer 1327497216 bytes (~1266Mb)
10:01:34.968: 17149: * Iteration 0
10:01:35.074: 17149: Making directory
/vz/dump/1987/dmp-VTSUxn-19.06.15-10.01/img/1
10:01:35.075: 17149: Issuing pre-dump command to service
10:01:36.120: 17149: Dumped 28561 pages, 0 skipped
10:01:36.120: 17149: Fs driver transfer 0 bytes
10:01:36.120: 17149: Checking iteration progress:
10:01:36.120: 17149: > Proceed to next iteration
10:01:36.120: 17149: * Iteration 1
10:01:36.122: 17149: Making directory
/vz/dump/1987/dmp-VTSUxn-19.06.15-10.01/img/2
10:01:36.122: 17149: Issuing pre-dump command to service
10:01:37.751: 17149: Dumped 360 pages, 28201 skipped
10:01:37.751: 17149: Fs driver transfer 0 bytes
10:01:37.751: 17149: Checking iteration progress:
10:01:37.751: 17149: > Proceed to next iteration
10:01:37.751: 17149: * Iteration 2
10:01:37.754: 17149: Making directory
/vz/dump/1987/dmp-VTSUxn-19.06.15-10.01/img/3
10:01:37.754: 17149: Issuing pre-dump command to service
10:01:38.485: 17149: Dumped 361 pages, 28200 skipped
10:01:38.485: 17149: Fs driver transfer 0 bytes
10:01:38.485: 17149: Checking iteration progress:
10:01:38.485: 17149: > Too many iterations
10:01:38.485: 17149: Final dump and restore
10:01:38.487: 17149: Making directory
/vz/dump/1987/dmp-VTSUxn-19.06.15-10.01/img/4
10:01:38.545: 17149: Issuing dump command to service
10:01:38.547: 17149: Notify (pre-dump)
10:01:38.555: 17149: Notify (network-lock)
10:01:38.579: 17149: Action script
/usr/libexec/criu/scripts/nfs-ports-allow.sh finished with exit code 0
10:01:38.580: 17149: Notify (post-network-lock)
10:01:41.047: 17149: Final FS and images sync
10:01:41.441: 17149: Sending images to target
10:01:41.442: 17149: Pack
10:01:41.493: 17149: Add htype images
10:01:41.722: 17149: Asking target host to restore
10:01:42.635: 17149: Remote exception
10:01:42.636: 17149: I/O operation on closed file
Traceback (most recent call last):
  File "/usr/libexec/phaul/p.haul", line 9, in 
load_entry_point('phaul==0.1', 'console_scripts', 'p.haul')()
  File "/usr/lib/python2.7/site-packages/phaul/shell/phaul_client.py", line
49, in main
worker.start_migration()
  File "/usr/lib/python2.7/site-packages/phaul/iters.py", line 161, in
start_migration
self.__start_live_migration()
  File "/usr/lib/python2.7/site-packages/phaul/iters.py", line 232, in
__start_live_migration
self.target_host.restore_from_images()
  File "/usr/lib/python2.7/site-packages/phaul/xem_rpc_client.py", line 26,
in __call__
raise Exception(resp[1])
Exception: I/O operation on closed file
--

Dump directory contents are also available -

Shall I open a bug report?
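
(Before filing, one check that can rule out a CRIU-level problem: phaul
delegates the actual dump/restore to CRIU, so it is worth confirming that
CRIU is present and that the kernel supports everything it needs on both
nodes. A sketch, assuming a stock VZ7 install:

# criu --version
# criu check

criu check probes the kernel for checkpoint/restore support and prints
"Looks good." when everything it tests is available.)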

Jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Changing System Time from container under OpenVZ 7

2019-06-02 Thread jjs - mainphrame
Greetings -

I'm happy to report that the feature "time:on" works exactly as hoped for
in a CT
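
(For the archives, the shape of the command involved - the exact spelling is
an assumption based on the feature name above and the usual prlctl --features
syntax, so double-check it against the docs:

# prlctl set <CTID> --features time:on

After that, the CT can set the system clock much like the old sys_time
capability allowed on OpenVZ 6.)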

Jake


On Tue, May 28, 2019 at 10:07 AM jjs - mainphrame 
wrote:

> Thanks for this -
>
> I had a hardware failure and am down to one openvz server, so I don't want
> to rock the boat. When the new box arrives this week, I'll migrate the CTs,
> boot to the new kernel, try the sys_time capability and report back.
>
> Jake
>
> On Tue, May 28, 2019 at 2:57 AM Konstantin Khorenko <
> khore...@virtuozzo.com> wrote:
>
>> On 05/21/2019 12:25 PM, Konstantin Khorenko wrote:
>> > Hi Jake, Ming,
>> >
>> > i've created a feature request to reintroduce sys_time capability:
>> > https://bugs.openvz.org/browse/OVZ-7096
>> >
>> > It does not look too complicated, so I hope it will be implemented.
>> >
>> > BTW, feel free to contribute. :)
>> > It worked in OpenVZ 6; one just needs to check the kernel and vzctl code
>> and port the patches. :)
>>
>> JFYI: the feature is ready (documentation is TBD),
>> so you can use the feature; all you need is to update vzkernel and
>> libvzctl from the factory repo:
>>vzkernel-3.10.0-957.12.2.vz7.96.6
>>libvzctl-7.0.521
>>
>> https://download.openvz.org/virtuozzo/factory/x86_64/os/Packages/v/
>> https://download.openvz.org/virtuozzo/factory/x86_64/os/Packages/l/
>>
>> Hope that helps.
>>
>> --
>> Best regards,
>>
>> Konstantin Khorenko,
>> Virtuozzo Linux Kernel Team
>>
>>
>> > On 05/20/2019 09:28 PM, jjs - mainphrame wrote:
>> >> Hi Konstantin,
>> >>
>> >> Just wanted to follow up on the removal of capability sys_time -
>> >>
>> >> This change is rather disappointing, because it means that virtuozzo
>> containers can no longer be used in the role they played in the past and
>> the management will likely just order
>> >> them replaced by VMware instances when it's time for hardware refresh.
>> >>
>> >> Sure, it's not optimal to have multiple containers potentially
>> adjusting the system time, but even if it's a kludge, it was a very useful
>> one.
>> >>
>> >> Jake
>> >>
>> >>
>> >>
>> >> On Thu, May 16, 2019 at 8:59 AM jjs - mainphrame <j...@mainphrame.com> wrote:
>> >>
>> >> Hi Konstantin,
>> >>
>> >> With commercial Virtuozzo, we deployed containers to various LANs,
>> where each container served ntp, among other things. The host itself is
>> isolated.
>> >> Removal of this capability brings about a dead-end scenario. Is
>> there some other way to accomplish the ntp server duties moving forward?
>> >>
>> >> Jake
>> >>
>> >>
>> >> On Thu, May 16, 2019 at 5:19 AM Konstantin Khorenko <
>> khore...@virtuozzo.com> wrote:
>> >>
>> >> On 05/16/2019 02:29 PM, Nethub Online - Ming wrote:
>> >> > Hi Konstantin,
>> >> >
>> >> > It is because we want to set up an NTP server in a CT, and then
>> let other servers update their system time via ntpdate from this CT.
>> However, after installing the NTP server in
>> >> this CT, it
>> >> > is unable to start the NTP service.
>> >>
>> >> Hi again,
>> >>
>> >> thank you, the usecase is understandable.
>> >> We usually just run ntp clients on all hosts and don't run into
>> time desync in Containers;
>> >> it's only a matter of one-time configuration of the ntp service...
>> >>
>> >> --
>> >> Best regards,
>> >>
>> >> Konstantin Khorenko,
>> >> Virtuozzo Linux Kernel Team
>> >>
>> >> > On May 16, 2019 at 5:25 PM, "Konstantin Khorenko" <khore...@virtuozzo.com> wrote:
>> >> >
>> >> > On 05/15/2019 05:11 AM, Nethub Online - Ming wrote:
>> >> > > Hi all,
>> >> > >
>> >> > > In the OpenVZ 6 users guide (
>> https://download.openvz.org/doc/OpenVZ-Users-Guide.pdf ), it allows me
>> to change the hardware node system time

Re: [Users] Changing System Time from container under OpenVZ 7

2019-05-28 Thread jjs - mainphrame
Thanks for this -

I had a hardware failure and am down to one openvz server, so I don't want
to rock the boat. When the new box arrives this week, I'll migrate the CTs,
boot to the new kernel, try the sys_time capability and report back.

Jake

On Tue, May 28, 2019 at 2:57 AM Konstantin Khorenko 
wrote:

> On 05/21/2019 12:25 PM, Konstantin Khorenko wrote:
> > Hi Jake, Ming,
> >
> > i've created a feature request to reintroduce sys_time capability:
> > https://bugs.openvz.org/browse/OVZ-7096
> >
> > It does not look too complicated, so I hope it will be implemented.
> >
> > BTW, feel free to contribute. :)
> > It worked in OpenVZ 6; one just needs to check the kernel and vzctl code
> and port the patches. :)
>
> JFYI: the feature is ready (documentation is TBD),
> so you can use the feature; all you need is to update vzkernel and
> libvzctl from the factory repo:
>vzkernel-3.10.0-957.12.2.vz7.96.6
>libvzctl-7.0.521
>
> https://download.openvz.org/virtuozzo/factory/x86_64/os/Packages/v/
> https://download.openvz.org/virtuozzo/factory/x86_64/os/Packages/l/
>
> Hope that helps.
>
> --
> Best regards,
>
> Konstantin Khorenko,
> Virtuozzo Linux Kernel Team
>
>
> > On 05/20/2019 09:28 PM, jjs - mainphrame wrote:
> >> Hi Konstantin,
> >>
> >> Just wanted to follow up on the removal of capability sys_time -
> >>
> >> This change is rather disappointing, because it means that virtuozzo
> containers can no longer be used in the role they played in the past and
> the management will likely just order
> >> them replaced by VMware instances when it's time for hardware refresh.
> >>
> >> Sure, it's not optimal to have multiple containers potentially
> adjusting the system time, but even if it's a kludge, it was a very useful
> one.
> >>
> >> Jake
> >>
> >>
> >>
> >> On Thu, May 16, 2019 at 8:59 AM jjs - mainphrame <j...@mainphrame.com> wrote:
> >>
> >> Hi Konstantin,
> >>
> >> With commercial Virtuozzo, we deployed containers to various LANs,
> where each container served ntp, among other things. The host itself is
> isolated.
> >> Removal of this capability brings about a dead-end scenario. Is
> there some other way to accomplish the ntp server duties moving forward?
> >>
> >> Jake
> >>
> >>
> >> On Thu, May 16, 2019 at 5:19 AM Konstantin Khorenko <
> khore...@virtuozzo.com> wrote:
> >>
> >> On 05/16/2019 02:29 PM, Nethub Online - Ming wrote:
> >> > Hi Konstantin,
> >> >
> >> > It is because we want to set up an NTP server in a CT, and then
> let other servers update their system time via ntpdate from this CT.
> However, after installing the NTP server in
> >> this CT, it
> >> > is unable to start the NTP service.
> >>
> >> Hi again,
> >>
> >> thank you, the usecase is understandable.
> >> We usually just run ntp clients on all hosts and don't run into
> time desync in Containers;
> >> it's only a matter of one-time configuration of the ntp service...
> >>
> >> --
> >> Best regards,
> >>
> >> Konstantin Khorenko,
> >> Virtuozzo Linux Kernel Team
> >>
> >> > On May 16, 2019 at 5:25 PM, "Konstantin Khorenko" <khore...@virtuozzo.com> wrote:
> >> >
> >> > On 05/15/2019 05:11 AM, Nethub Online - Ming wrote:
> >> > > Hi all,
> >> > >
> >> > > In the OpenVZ 6 users guide (
> https://download.openvz.org/doc/OpenVZ-Users-Guide.pdf ), it allows me to
> change the hardware node system time from a container with the command
> >> "vzctl set 101
> >> > > --capability sys_time:on --save"
> >> > >
> >> > > However, I found that there is no such parameter for
> vzctl or prlctl under OpenVZ 7; may I know how to change the hardware node
> system time from a CT under OpenVZ 7?
> >> >
> >> > Hi,
> >> >
> >> > no way at the moment, and you are the first to ask for it.
> >>   

Re: [Users] Usage Statistics for OpenVZ 7

2019-05-26 Thread jjs - mainphrame
I just wanted to say this is very interesting info. Thanks for all the work.

Jake
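
(As Denis notes below, participation is controlled by the disp-helper
service, so opting out on a given node is just the usual systemd toggle -
a sketch, with the service name taken from the message:

# systemctl stop disp-helper
# systemctl disable disp-helper

Re-enabling and starting the service again resumes reporting.)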

On Fri, May 24, 2019 at 8:31 AM Denis Silakov 
wrote:

> Usage Statistics for OpenVZ 7
>
> Hi all,
>
> As most of you probably know, OpenVZ 7 sends some statistics to
> stats7.openvz.org to let developers know how the product is used (one can
> turn these statistics on/off by enabling/disabling the disp-helper service).
>
> For legacy OpenVZ versions, this data is visualized at
> https://stats.openvz.org/.
>
> It took us some time to prepare a public site where everyone can observe
> anonymized statistics of OpenVZ 7 usage, since the data collection process
> has been merged with commercial Virtuozzo 7, which is completely different
> from OpenVZ 6. But finally, we have launched it:
>
> http://stats7-web.openvz.org/
>
> Feel free to browse and provide feedback.
>
> In particular, let us know if you think that some data is sensitive and
> should be dropped.
>
> For those interested in source code, data collection scripts are located
> here:
> https://src.openvz.org/projects/OVZ/repos/disp-helper/browse
>
> Patches are welcome. If it is not clear how disp-helper works - we are
> ready to provide detailed description (and likely add it to the git:))
>
> Note that not all of the data collected by disp-helper is visualized on
> the site. We will add more info as time goes by, but if you want to get
> some information visualized in the first place - let us know, and we will
> consider prioritizing your wishes.
>
> --
> Regards,
> Denis.
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Changing System Time from container under OpenVZ 7

2019-05-20 Thread jjs - mainphrame
Hi Konstantin,

Just wanted to follow up on the removal of capability sys_time -

This change is rather disappointing, because it means that virtuozzo
containers can no longer be used in the role they played in the past, and
management will likely just order them replaced by VMware instances
when it's time for hardware refresh.

Sure, it's not optimal to have multiple containers potentially adjusting
the system time, but even if it's a kludge, it was a very useful one.

Jake



On Thu, May 16, 2019 at 8:59 AM jjs - mainphrame  wrote:

> Hi Konstantin,
>
> With commercial Virtuozzo, we deployed containers to various LANs, where
> each container served ntp, among other things. The host itself is isolated.
> Removal of this capability brings about a dead-end scenario. Is there some
> other way to accomplish the ntp server duties moving forward?
>
> Jake
>
>
> On Thu, May 16, 2019 at 5:19 AM Konstantin Khorenko <
> khore...@virtuozzo.com> wrote:
>
>> On 05/16/2019 02:29 PM, Nethub Online - Ming wrote:
>> > Hi Konstantin,
>> >
>> > It is because we want to set up an NTP server in a CT, and then let other
>> servers update their system time via ntpdate from this CT. However, after
>> installing the NTP server in this CT, it
>> > is unable to start the NTP service.
>>
>> Hi again,
>>
>> thank you, the usecase is understandable.
>> We usually just run ntp clients on all hosts and don't run into time
>> desync in Containers;
>> it's only a matter of one-time configuration of the ntp service...
>>
>> --
>> Best regards,
>>
>> Konstantin Khorenko,
>> Virtuozzo Linux Kernel Team
>>
>> > On May 16, 2019 at 5:25 PM, "Konstantin Khorenko" <khore...@virtuozzo.com> wrote:
>> >
>> > On 05/15/2019 05:11 AM, Nethub Online - Ming wrote:
>> > > Hi all,
>> > >
>> > > In the OpenVZ 6 users guide (
>> https://download.openvz.org/doc/OpenVZ-Users-Guide.pdf ), it allows me
>> to change the hardware node system time from a container with the command "vzctl set 101
>> > > --capability sys_time:on --save"
>> > >
>> > > However, I found that there is no such parameter for vzctl or
>> prlctl under OpenVZ 7; may I know how to change the hardware node system time
>> from a CT under OpenVZ 7?
>> >
>> > Hi,
>> >
>> > no way at the moment, and you are the first to ask for it.
>> > Why do you need it?
>> > What is your use case?
>> >
>> > Thank you.
>> >
>> > --
>> > Best regards,
>> >
>> > Konstantin Khorenko,
>> > Virtuozzo Linux Kernel Team
>> >
>> > ___
>> > Users mailing list
>> > Users@openvz.org
>> > https://lists.openvz.org/mailman/listinfo/users
>> >
>> >
>> >
>> > ___
>> > Users mailing list
>> > Users@openvz.org
>> > https://lists.openvz.org/mailman/listinfo/users
>> >
>>
>> ___
>> Users mailing list
>> Users@openvz.org
>> https://lists.openvz.org/mailman/listinfo/users
>>
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] OpenVZ 7 orchestration

2019-05-18 Thread jjs - mainphrame
That's perfect. I wouldn't expect to get all the premium features in a free
tool. But it can be a foot in the door, to get them thinking seriously
about the fully supported commercial product.

Jake

On Sat, May 18, 2019 at 4:41 AM Paulo Coghi - Coghi IT 
wrote:

> I'm starting to develop an open source web GUI. The idea is to facilitate
> the adoption of the new OpenVZ (7) and help with scenarios like the one
> mentioned.
>
> But I'm not an employee of Virtuozzo, and this web GUI, of course, will not
> offer features like distributed storage or automated backups, since we need
> to encourage and facilitate the transition to the commercial Virtuozzo
> version, in order to make the whole OpenVZ ecosystem sustainable.
>
>
> On Fri, May 17, 2019 at 6:55 PM jjs - mainphrame 
> wrote:
>
>> Hi Konstantin -
>>
>> That's commercial only, right? A student, or a sys admin learning the
>> technology in order to support future customers might find the pricing a
>> bit steep.
>>
>> Don't get me wrong, virtuozzo is great, but it's nice to have a
>> non-commercial option as well, to lower the bar to entry.
>>
>> Jake
>>
>> On Thu, May 16, 2019 at 6:28 AM Konstantin Khorenko <
>> khore...@virtuozzo.com> wrote:
>>
>>> On 05/16/2019 04:10 PM, Jehan Procaccia wrote:
>>> > Hello,
>>> >
>>> > we need to let users start/stop openvz7 containers
>>> >
>>> > we could probably create local scripts that allow users (from our ldap
>>> > directory, not anyone in the world !) with sudo to do that.
>>> >
>>> > but I know automation/orchestration tools exist out there (Kubernetes
>>> > for example ...)
>>> >
>>> > are there community tools, or an openvz/virtuozzo integrated tool, that
>>> would
>>> > allow us to delegate the management of containers to our users without
>>> > giving out privileges on the host?
>>>
>>> For Virtuozzo 7 it's done via PowerPanel:
>>> https://docs.virtuozzo.com/virtuozzo_powerpanel_users_guide/index.html
>>>
>>> ==
>>> About Virtuozzo PowerPanel
>>> Virtuozzo PowerPanel provides an easy way for you to manage all your
>>> virtual machines and containers from one web panel.
>>>
>>> In Virtuozzo PowerPanel, you can:
>>>
>>> * start, stop, and reset your virtual machines and containers,
>>> * reinstall your containers,
>>> * change user passwords for your virtual machines and containers,
>>> * create, restore, and delete backups of your virtual machines and
>>> containers,
>>> * log in to your virtual machines and containers via VNC.
>>>
>>> --
>>> Best regards,
>>>
>>> Konstantin Khorenko,
>>> Virtuozzo Linux Kernel Team
>>>
>>> ___
>>> Users mailing list
>>> Users@openvz.org
>>> https://lists.openvz.org/mailman/listinfo/users
>>>
>> ___
>> Users mailing list
>> Users@openvz.org
>> https://lists.openvz.org/mailman/listinfo/users
>>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] OpenVZ 7 orchestration

2019-05-17 Thread jjs - mainphrame
Hi Konstantin -

That's commercial only, right? A student, or a sys admin learning the
technology in order to support future customers might find the pricing a
bit steep.

Don't get me wrong, virtuozzo is great, but it's nice to have a
non-commercial option as well, to lower the bar to entry.

Jake
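
(For the sudo-based stopgap Jehan describes in the quoted thread below, a
minimal sketch of a sudoers drop-in restricted to start/stop - the group
name is hypothetical, the prlctl path assumes a stock install, and
wildcarded sudo rules need careful review before real use:

# echo '%ctusers ALL=(root) NOPASSWD: /usr/bin/prlctl start *, /usr/bin/prlctl stop *' \
    > /etc/sudoers.d/ctusers
# visudo -cf /etc/sudoers.d/ctusers

visudo -cf only validates the syntax; the security review is still on you.)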

On Thu, May 16, 2019 at 6:28 AM Konstantin Khorenko 
wrote:

> On 05/16/2019 04:10 PM, Jehan Procaccia wrote:
> > Hello,
> >
> > we need to let users start/stop openvz7 containers
> >
> > we could probably create local scripts that allow users (from our ldap
> > directory, not anyone in the world !) with sudo to do that.
> >
> > but I know automation/orchestration tools exist out there (Kubernetes
> > for example ...)
> >
> > are there community tools, or an openvz/virtuozzo integrated tool, that would
> > allow us to delegate the management of containers to our users without
> > giving out privileges on the host?
>
> For Virtuozzo 7 it's done via PowerPanel:
> https://docs.virtuozzo.com/virtuozzo_powerpanel_users_guide/index.html
>
> ==
> About Virtuozzo PowerPanel
> Virtuozzo PowerPanel provides an easy way for you to manage all your
> virtual machines and containers from one web panel.
>
> In Virtuozzo PowerPanel, you can:
>
> * start, stop, and reset your virtual machines and containers,
> * reinstall your containers,
> * change user passwords for your virtual machines and containers,
> * create, restore, and delete backups of your virtual machines and
> containers,
> * log in to your virtual machines and containers via VNC.
>
> --
> Best regards,
>
> Konstantin Khorenko,
> Virtuozzo Linux Kernel Team
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] odd issues with vzmigrate

2019-05-16 Thread jjs - mainphrame
I've noticed that live migration stopped working for me on OVZ 7 as well.
It used to work; heck, even in openvz 6 it worked.

I posted here after noticing the issue, and was advised that live migration
between Intel and AMD CPUs was problematic. So yesterday I bit the bullet
and swapped my OVZ AMD host for a genuine Intel box. Sadly, even with Intel
on both OVZ hosts, live migration always fails. It still claims a CPU
mismatch, and so I pass -f cpu, but it still crashes and burns. Dead
migration still works though.

Although both nodes are now running Intel, the CPUs are not identical. Is
this an issue?

Here are the CPUs on each node:

[root@hachi ~]# lscpu
Architecture:  x86_64
CPU op-mode(s):32-bit, 64-bit
Byte Order:Little Endian
CPU(s):2
On-line CPU(s) list:   0,1
Thread(s) per core:1
Core(s) per socket:2
Socket(s): 1
NUMA node(s):  1
Vendor ID: GenuineIntel
CPU family:6
Model: 23
Model name:Intel(R) Core(TM)2 Duo CPU E8400  @ 3.00GHz
Stepping:  10
CPU MHz:   2992.785
BogoMIPS:  5985.57
Virtualization:VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache:  6144K
NUMA node0 CPU(s): 0,1
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf eagerfpu
pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave
lahf_lm tpr_shadow vnmi flexpriority dtherm
[root@hachi ~]#

[root@annie nagios]# cd
[root@annie ~]# cls

[root@annie ~]# lscpu
Architecture:  x86_64
CPU op-mode(s):32-bit, 64-bit
Byte Order:Little Endian
CPU(s):8
On-line CPU(s) list:   0-7
Thread(s) per core:2
Core(s) per socket:4
Socket(s): 1
NUMA node(s):  1
Vendor ID: GenuineIntel
CPU family:6
Model: 42
Model name:Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
Stepping:  7
CPU MHz:   1599.768
CPU max MHz:   3800.
CPU min MHz:   1600.
BogoMIPS:  6784.18
Virtualization:VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache:  256K
L3 cache:  8192K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx
est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt
tsc_deadline_timer aes xsave avx lahf_lm epb ssbd ibrs ibpb stibp
tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts
spec_ctrl intel_stibp flush_l1d
[root@annie ~]#
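
(One way to see exactly what differs between the two nodes - and what the
migration check is likely to trip over - is to diff the lscpu output side
by side, e.g. from annie:

# diff <(ssh hachi lscpu) <(lscpu)

Judging by the flags above, the i7-2600 advertises a number of features the
E8400 lacks (sse4_2, aes, avx, x2apic, ...), so a CT started on annie may be
running code that hachi simply cannot execute; that is the situation the CPU
check guards against.)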

Jake






On Thu, May 16, 2019 at 1:21 PM Arjit Chaudhary  wrote:

> Oddly enough, the issue is now present on another server as well; I have
> created a bug report for it:
> https://bugs.openvz.org/projects/OVZ/issues/OVZ-7091?filter=allopenissues
>
> [root@de23 ~]# prlctl migrate 16764 192.168.0.2  --ssh "-o Port=22"
> --verbose 10
> Logging in
> server uuid={98c783aa-90eb-47c9-99ea-1838ae37124d}
> sessionid={4cc677b9-ec79-4bb6-9871-2458cbade840}
> The virtual machine found: 16764
> Migrate the CT 16764 on 192.168.0.2 ()
> security_level=0
> PrlCleanup::register_hook: 4afcdd40
> EVENT type=100030
> Migration started.
> EVENT type=100523
> Checking preconditions
> EVENT type=100031
> Migration cancelled!
>
> Failed to migrate the CT: Failed to migrate the Container. An internal
> error occurred when performing the operation. Try to migrate the Container
> again. If the problem persists, contact the Virtuozzo support team for
> assistance.
> resultCount: 0
> PrlCleanup::unregister_hook: 4afcdd40
> Logging off
>
>
> On Wed, May 15, 2019 at 10:23 PM Arjit Chaudhary 
> wrote:
>
>> re: prlctl migrate -- It was my mistake with the command,
>>
>> I was applying,
>> > prlctl migrate 192.168.0.10 5128
>>
>> Instead of,
>> > prlctl migrate 5128 192.168.0.10
>>
>> So now prlctl migrate works fine; vzmigrate still has this issue, with
>> the same output as before.
>>
>> On Mon, May 13, 2019 at 11:23 AM Vasily Averin  wrote:
>>
>>> Dear Arjit,
>>> it looks like a bug to me,
>>> and I would advise you to submit a bug to the OpenVZ bug tracker
>>> https://bugs.openvz.org/ for the OpenVZ project
>>>
>>> thank you,
>>> Vasily Averin
>>>
>>> On 5/11/19 3:16 PM, Arjit Chaudhary wrote:
>>> > Hello,
>>> > I've been using vzmigrate without issues for a couple of years on VZ6,
>>> but on VZ7 I run into this odd issue:
>>> >
>>> > I am able to migrate from a newer kernel to an older kernel
>>> > BUT
>>> > I am unable to migrate from the same kernel to the same kernel?
>>> >
>>> 

Re: [Users] Changing System Time from container under OpenVZ 7

2019-05-16 Thread jjs - mainphrame
Hi Konstantin,

With commercial Virtuozzo, we deployed containers to various LANs, where
each container served ntp, among other things. The host itself is isolated.
Removal of this capability brings about a dead-end scenario. Is there some
other way to accomplish the ntp server duties moving forward?

Jake
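
(For the client-only setup Konstantin suggests in the reply quoted below,
the per-host configuration really is a one-time step on an EL7-based node -
a sketch using chrony, the stock EL7 time client; ntpd would work equally
well:

# yum install -y chrony
# systemctl enable chronyd
# systemctl start chronyd
# chronyc tracking

The last command confirms the host is actually synchronized.)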


On Thu, May 16, 2019 at 5:19 AM Konstantin Khorenko 
wrote:

> On 05/16/2019 02:29 PM, Nethub Online - Ming wrote:
> > Hi Konstantin,
> >
> > It is because we want to set up an NTP server in a CT, and then let other
> servers update their system time via ntpdate from this CT. However, after
> installing the NTP server in this CT, it
> > is unable to start the NTP service.
>
> Hi again,
>
> thank you, the usecase is understandable.
> We usually just run ntp clients on all hosts and don't run into time
> desync in Containers;
> it's only a matter of one-time configuration of the ntp service...
>
> --
> Best regards,
>
> Konstantin Khorenko,
> Virtuozzo Linux Kernel Team
>
> > On May 16, 2019 at 5:25 PM, "Konstantin Khorenko" wrote:
> >
> > On 05/15/2019 05:11 AM, Nethub Online - Ming wrote:
> > > Hi all,
> > >
> > > In the OpenVZ 6 users guide (
> https://download.openvz.org/doc/OpenVZ-Users-Guide.pdf ), it allows me to
> change the hardware node system time from a container with the command "vzctl set 101
> > --capability sys_time:on --save"
> > >
> > > However, I found that there is no such parameter for vzctl or
> prlctl under OpenVZ 7; may I know how to change the hardware node system time
> from a CT under OpenVZ 7?
> >
> > Hi,
> >
> > no way at the moment, and you are the first to ask for it.
> > Why do you need it?
> > What is your use case?
> >
> > Thank you.
> >
> > --
> > Best regards,
> >
> > Konstantin Khorenko,
> > Virtuozzo Linux Kernel Team
> >
> > ___
> > Users mailing list
> > Users@openvz.org 
> > https://lists.openvz.org/mailman/listinfo/users
> >
> >
> >
> > ___
> > Users mailing list
> > Users@openvz.org
> > https://lists.openvz.org/mailman/listinfo/users
> >
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] web console

2019-05-15 Thread jjs - mainphrame
That is great news -

Jake

On Wed, May 15, 2019 at 5:12 PM Paulo Coghi - Coghi IT 
wrote:

> I'm preparing a new one, to propose to the community.
>
> On Wed, May 15, 2019, 04:21 jjs - mainphrame  wrote:
>
>> Greetings -
>>
>> I was just reminded of the glory days of openvz 6, when there was a nice
>> web console. Is there anything like that for OVZ 7?
>>
>> Jake
>> ___
>> Users mailing list
>> Users@openvz.org
>> https://lists.openvz.org/mailman/listinfo/users
>>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] web console

2019-05-14 Thread jjs - mainphrame
Greetings -

I was just reminded of the glory days of openvz 6, when there was a nice
web console. Is there anything like that for OVZ 7?

Jake
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] vzmigrate --online used to work, now fails with "CPU mismatch"

2019-03-23 Thread jjs - mainphrame
Thanks, patched my local copies.

Jake
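
(Turning the man-page excerpt quoted below into a concrete invocation -
host name and CT id are reused from earlier in this thread, so treat them
as placeholders:

# vzmigrate --online --nodeps=cpu_check hachi 1989

This skips only the CPU-capabilities check while keeping all the other
precondition checks, which is less blunt than a bare -f/--nodeps.)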

On Fri, Mar 22, 2019 at 4:29 PM Kirill Kolyshkin 
wrote:

> On Fri, 22 Mar 2019 at 13:16, Konstantin Khorenko 
> wrote:
>
>> On 03/22/2019 07:51 PM, jjs - mainphrame wrote:
>>
>> However, one is an Intel CPU, the other is AMD. Live migration of
>> containers between them had been working for about 3 years, but now it
>> balks at "CPUs mismatch".
>>
>> You know, you are very lucky. We do face issues from time to time when
>> processes die after online migration and
>> the root cause turns out to be a CPU difference.
>>
>> I wonder, is there some way to override the paranoia? Ideally, an admin
>> could say "yes, I understand the CPUs aren't identical, but do it anyway"
>>
>> Here you are:
>>
>> # man vzmigrate
>>
>>-f,
>> --nodeps[=[all][,cpu_check][,disk_space][,technologies][,license][,rate][,bindmount][,tem‐
>>plate_area_sync][,kernel_modules]]
>>   Ignore  an  absence of required package sets on destination
>> node.  To prevent CT against errors
>>   in filesystem due to absent template files, it will not be
>> started on  destination  node  after
>>   migration and must be started manually.
>>   Additional parameters:
>>   all - as is -f.
>>   cpu_check - to pass check of the cpu capabilities.
>>
>
> It's a pity that all this is written in such an incomprehensible way...
> but it's Friday, so I took some time to fix this. Please see the
> attached patch.
>
>
>> --
>> Best regards,
>>
>> Konstantin Khorenko,
>> Virtuozzo Linux Kernel Team
>>
>>
>> Jake
>>
>> On Fri, Mar 22, 2019 at 9:08 AM jjs - mainphrame 
>> wrote:
>>
>>> The output on both hosts is "x86_64"
>>>
>>> Jake
>>>
>>> On Fri, Mar 22, 2019 at 1:32 AM Narcis Garcia 
>>> wrote:
>>>
>>>> What is the output of this command in both origin and destination hosts?
>>>> $ uname -m
>>>>
>>>>
>>>> El 21/3/19 a les 23:27, jjs - mainphrame ha escrit:
>>>> > Greetings -
>>>> >
>>>> > vzmigrate --online always worked reliably on my 2 openvz 7 servers,
>>>> but
>>>> > nowadays, vzmigrate fails, for all containers, every time.
>>>> >
>>>> > ((CPUs mismatch))) -
>>>> >
>>>> > Apologies if I missed a memo, but why has that only now become an
>>>> issue?
>>>> >
>>>> > [root@annie ~]# vzmigrate hachi 1989 --online
>>>> > Connection to destination node (hachi) is successfully established
>>>> > Moving/copying CT 1989 -> CT 1989, [], [] ...
>>>> > locking 1989
>>>> > Checking bindmounts
>>>> > Check cluster ID
>>>> > Checking keep dir for private area copy
>>>> > Check of requires kernel modules
>>>> > Checking technologies
>>>> > Checking IP addresses on destination node
>>>> > Checking RATE parameters in config
>>>> > Checking ploop format 2
>>>> > copy CT private /vz/private/1989
>>>> > Live migration stage started
>>>> > Compression is enabled
>>>> > Phaul service failed to live migrate CT
>>>> > Phaul failed to live migrate CT (/var/log/phaul.log)
>>>> > Can't move/copy CT 1989 -> CT 1989, [], [] : Phaul failed to live
>>>> > migrate CT (/var/log/phaul.log)
>>>> > unlocking 1989
>>>> > [root@annie ~]# tail /var/log/phaul.log
>>>> > load_entry_point('phaul==0.1', 'console_scripts', 'p.haul')()
>>>> >   File "/usr/lib/python2.7/site-packages/phaul/shell/phaul_client.py",
>>>> > line 49, in main
>>>> > worker.start_migration()
>>>> >   File "/usr/lib/python2.7/site-packages/phaul/iters.py", line 161, in
>>>> > start_migration
>>>> > self.__start_live_migration()
>>>> >   File "/usr/lib/python2.7/site-packages/phaul/iters.py", line 175, in
>>>> > __start_live_migration
>>>> > self.__validate_cpu()
>>>> >   File "/usr/lib/python2.7/site-packages/phaul/iters.py", line 114, in
>>>> > __validate_cpu
>>>> > raise Exception("CPUs mismatch")
>>>> > Exception: CPUs mismatch
>>>> > [root@annie ~]#
>>>> >
>>>> > Regards,
>>>> >
>>>>
>>> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] vzmigrate --online used to work, now fails with "CPU mismatch"

2019-03-22 Thread jjs - mainphrame
Ah, I should have looked harder. Thanks for the gentle heads-up.

Jake

On Fri, Mar 22, 2019 at 1:18 PM Konstantin Khorenko 
wrote:

> On 03/22/2019 07:51 PM, jjs - mainphrame wrote:
>
> However, one is an Intel CPU, the other is AMD. Live migration of
> containers between them had been working for about 3 years, but now it
> balks at "CPUs mismatch".
>
> You know, you are very lucky. We do face issues from time to time when
> processes die after online migration and
> the root cause turns out to be a CPU difference.
>
> I wonder, is there some way to override the paranoia? Ideally, an admin
> could say "yes, I understand the CPUs aren't identical, but do it anyway"
>
> Here you are:
>
> # man vzmigrate
>
>-f,
> --nodeps[=[all][,cpu_check][,disk_space][,technologies][,license][,rate][,bindmount][,tem‐
>plate_area_sync][,kernel_modules]]
>   Ignore  an  absence of required package sets on destination
> node.  To prevent CT against errors
>   in filesystem due to absent template files, it will not be
> started on  destination  node  after
>   migration and must be started manually.
>   Additional parameters:
>   all - as is -f.
>   cpu_check - to pass check of the cpu capabilities.
>
> --
> Best regards,
>
> Konstantin Khorenko,
> Virtuozzo Linux Kernel Team
>
>
> Jake
>
> On Fri, Mar 22, 2019 at 9:08 AM jjs - mainphrame 
> wrote:
>
>> The output on both hosts is "x86_64"
>>
>> Jake
>>
>> On Fri, Mar 22, 2019 at 1:32 AM Narcis Garcia 
>> wrote:
>>
>>> What is the output of this command in both origin and destination hosts?
>>> $ uname -m
>>>
>>>
>>> El 21/3/19 a les 23:27, jjs - mainphrame ha escrit:
>>> > Greetings -
>>> >
>>> > vzmigrate --online always worked reliably on my 2 openvz 7 servers, but
>>> > nowadays, vzmigrate fails, for all containers, every time.
>>> >
>>> > ((CPUs mismatch))) -
>>> >
>>> > Apologies if I missed a memo, but why has that only now become an
>>> issue?
>>> >
>>> > [root@annie ~]# vzmigrate hachi 1989 --online
>>> > Connection to destination node (hachi) is successfully established
>>> > Moving/copying CT 1989 -> CT 1989, [], [] ...
>>> > locking 1989
>>> > Checking bindmounts
>>> > Check cluster ID
>>> > Checking keep dir for private area copy
>>> > Check of requires kernel modules
>>> > Checking technologies
>>> > Checking IP addresses on destination node
>>> > Checking RATE parameters in config
>>> > Checking ploop format 2
>>> > copy CT private /vz/private/1989
>>> > Live migration stage started
>>> > Compression is enabled
>>> > Phaul service failed to live migrate CT
>>> > Phaul failed to live migrate CT (/var/log/phaul.log)
>>> > Can't move/copy CT 1989 -> CT 1989, [], [] : Phaul failed to live
>>> > migrate CT (/var/log/phaul.log)
>>> > unlocking 1989
>>> > [root@annie ~]# tail /var/log/phaul.log
>>> > load_entry_point('phaul==0.1', 'console_scripts', 'p.haul')()
>>> >   File "/usr/lib/python2.7/site-packages/phaul/shell/phaul_client.py",
>>> > line 49, in main
>>> > worker.start_migration()
>>> >   File "/usr/lib/python2.7/site-packages/phaul/iters.py", line 161, in
>>> > start_migration
>>> > self.__start_live_migration()
>>> >   File "/usr/lib/python2.7/site-packages/phaul/iters.py", line 175, in
>>> > __start_live_migration
>>> > self.__validate_cpu()
>>> >   File "/usr/lib/python2.7/site-packages/phaul/iters.py", line 114, in
>>> > __validate_cpu
>>> > raise Exception("CPUs mismatch")
>>> > Exception: CPUs mismatch
>>> > [root@annie ~]#
>>> >
>>> > Regards,
>>> >
>>> > Jake
>>> >
>>> >
>>> > ___
>>> > Users mailing list
>>> > Users@openvz.org
>>> > https://lists.openvz.org/mailman/listinfo/users
>>> >
>>> ___
>>> Users mailing list
>>> Users@openvz.org
>>> https://lists.openvz.org/mailman/listinfo/users
>>>
>>
>
> ___
> Users mailing 
> listUsers@openvz.orghttps://lists.openvz.org/mailman/listinfo/users
>
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] vzmigrate --online used to work, now fails with "CPU mismatch"

2019-03-22 Thread jjs - mainphrame
However, one is an Intel CPU, the other is AMD. Live migration of
containers between them had been working for about 3 years, but now it
balks at "CPUs mismatch".

I wonder, is there some way to override the paranoia? Ideally, an admin
could say "yes, I understand the CPUs aren't identical, but do it anyway"

Jake

On Fri, Mar 22, 2019 at 9:08 AM jjs - mainphrame  wrote:

> The output on both hosts is "x86_64"
>
> Jake
>
> On Fri, Mar 22, 2019 at 1:32 AM Narcis Garcia 
> wrote:
>
>> What is the output of this command in both origin and destination hosts?
>> $ uname -m
>>
>>
>> El 21/3/19 a les 23:27, jjs - mainphrame ha escrit:
>> > Greetings -
>> >
>> > vzmigrate --online always worked reliably on my 2 openvz 7 servers, but
>> > nowadays, vzmigrate fails, for all containers, every time.
>> >
>> > ((CPUs mismatch))) -
>> >
>> > Apologies if I missed a memo, but why has that only now become an issue?
>> >
>> > [root@annie ~]# vzmigrate hachi 1989 --online
>> > Connection to destination node (hachi) is successfully established
>> > Moving/copying CT 1989 -> CT 1989, [], [] ...
>> > locking 1989
>> > Checking bindmounts
>> > Check cluster ID
>> > Checking keep dir for private area copy
>> > Check of requires kernel modules
>> > Checking technologies
>> > Checking IP addresses on destination node
>> > Checking RATE parameters in config
>> > Checking ploop format 2
>> > copy CT private /vz/private/1989
>> > Live migration stage started
>> > Compression is enabled
>> > Phaul service failed to live migrate CT
>> > Phaul failed to live migrate CT (/var/log/phaul.log)
>> > Can't move/copy CT 1989 -> CT 1989, [], [] : Phaul failed to live
>> > migrate CT (/var/log/phaul.log)
>> > unlocking 1989
>> > [root@annie ~]# tail /var/log/phaul.log
>> > load_entry_point('phaul==0.1', 'console_scripts', 'p.haul')()
>> >   File "/usr/lib/python2.7/site-packages/phaul/shell/phaul_client.py",
>> > line 49, in main
>> > worker.start_migration()
>> >   File "/usr/lib/python2.7/site-packages/phaul/iters.py", line 161, in
>> > start_migration
>> > self.__start_live_migration()
>> >   File "/usr/lib/python2.7/site-packages/phaul/iters.py", line 175, in
>> > __start_live_migration
>> > self.__validate_cpu()
>> >   File "/usr/lib/python2.7/site-packages/phaul/iters.py", line 114, in
>> > __validate_cpu
>> > raise Exception("CPUs mismatch")
>> > Exception: CPUs mismatch
>> > [root@annie ~]#
>> >
>> > Regards,
>> >
>> > Jake
>> >
>> >
>> > ___
>> > Users mailing list
>> > Users@openvz.org
>> > https://lists.openvz.org/mailman/listinfo/users
>> >
>> ___
>> Users mailing list
>> Users@openvz.org
>> https://lists.openvz.org/mailman/listinfo/users
>>
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


  1   2   3   >