Re: [Users] IPv6, openVZ7, CentOS7
Hi, It could be br0 if you are bridging enp3s0f0 and/or enp3s0f1. Basically, the interface which lists as UP and with the IPv4 information etc. should be the right one, in my experience. But do back up the original configuration files on the system before editing; in case something goes wrong, you can just copy them back and restart networking. On Thu, 15 Dec, 2022, 17:40 Oleksiy Tkachenko, wrote: > I have many) > # ip a > 1: lo: > 2: enp3s0f0: > 3: enp3s0f1: > 4: enp0s20f0u9u2c2: > 5: br0: > 6: virbr0: > 7: virbr0-nic: > 8: venet0: > 9: host-routed: > 11: veth72240bcc@if3: > 12: veth720bed05@if3: > 13: veth7290a381@if3: > 16: veth721813e6@if3: > 17: veth720a338a@if3: > 24: veth729a3460@if3: > 25: veth720c6518@if3: > 26: veth721bbb55@if3: > 27: veth72705632@if3: > 28: veth72b28dcd@if3: > > > чт, 15 груд. 2022 р. о 13:52 Arjit Chaudhary пише: > >> Is your interface name different by chance? >> >> Can you check, >> >> > ip a >> >> for the actual interface name and then update that interface >> configuration for the IPv6. >> >> On Thu, 15 Dec, 2022, 16:43 Oleksiy Tkachenko, >> wrote: >> >>> Thank you! >>> During node preparation I need to update >>> "/etc/sysconfig/network-scripts/ifcfg-ethX" - but there is no such file. >>> I also need to put "eth0" in "/etc/sysconfig/network" file - is it >>> actual now (when I have no ethX file)? >>> >>> I also need to "Add 'ipt_state' to IPTABLES and 'nf_conntrack_ipv6' to >>> IP6TABLES" in "/etc/vz.conf" - there is no such file also. >>> I have "/etc/vz/vz.conf" on my bare metal openvz7 but there is no >>> "IPTABLES" or "IP6TABLES" mention at all. >>> >>> >>> чт, 15 груд. 2022 р. о 02:03 Arjit Chaudhary пише: >>> >>>> (In my experience) >>>> >>>> Does the host-node have working IPv6 configured on it? If yes, >>>> >>>> Then you can just do, >>>> >>>> vzctl set --ipadd -- save >>>> >>>> and the container too will have working IPv6. 
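The "back up the configuration files first" advice above can be scripted. A minimal sketch, assuming the RHEL/CentOS-style /etc/sysconfig/network-scripts layout discussed in this thread; the function name and backup destination are illustrative, not from the thread:

```shell
#!/bin/sh
# Back up a directory of ifcfg-* files into a timestamped copy before
# editing, so a broken edit can be reverted with a simple copy-back.
backup_net_configs() {
    # $1: directory holding the ifcfg-* files
    # $2: directory to place the backup under
    cfg_dir="$1"
    backup_dir="$2/network-scripts.bak.$(date +%Y%m%d%H%M%S)"
    mkdir -p "$backup_dir"
    cp -a "$cfg_dir"/. "$backup_dir"/
    # Print the backup location so the caller can note it down.
    echo "$backup_dir"
}

# Typical use on the host (commented out; requires root):
# backup_net_configs /etc/sysconfig/network-scripts /root
```

To revert, copy the files back from the printed backup directory and restart networking, as the thread suggests.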
>>>> >>>> If the host-node does not have working IPv6, then you can just >>>> configure any address of the IPv6 range on the host-node >>>> >>>> >>>> On Thu, Dec 15, 2022 at 4:15 AM UNLIM.SRV wrote: >>>> >>>>> Trying to setup IPv6 for CentOS7 CT. >>>>> Seems original documentation outdated a bit ( >>>>> https://wiki.openvz.org/IPv6). >>>>> Is there updated information? >>>>> >>>>> Thank you! >>>>> >>>>> >>>>> -- >>>>> Oleksiy >>>>> ___ >>>>> Users mailing list >>>>> Users@openvz.org >>>>> https://lists.openvz.org/mailman/listinfo/users >>>>> >>>> >>>> >>>> -- >>>> Thanks, >>>> Arjit Chaudhary >>>> ___ >>>> Users mailing list >>>> Users@openvz.org >>>> https://lists.openvz.org/mailman/listinfo/users >>>> >>> ___ >>> Users mailing list >>> Users@openvz.org >>> https://lists.openvz.org/mailman/listinfo/users >>> >> ___ >> Users mailing list >> Users@openvz.org >> https://lists.openvz.org/mailman/listinfo/users >> > ___ > Users mailing list > Users@openvz.org > https://lists.openvz.org/mailman/listinfo/users > ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
Re: [Users] IPv6, openVZ7, CentOS7
Is your interface name different by chance? Can you check, > ip a for the actual interface name and then update that interface configuration for the IPv6. On Thu, 15 Dec, 2022, 16:43 Oleksiy Tkachenko, wrote: > Thank you! > During node preparation I need to update > "/etc/sysconfig/network-scripts/ifcfg-ethX" - but there is no such file. > I also need to put "eth0" in "/etc/sysconfig/network" file - is it actual > now (when I have no ethX file)? > > I also need to "Add 'ipt_state' to IPTABLES and 'nf_conntrack_ipv6' to > IP6TABLES" in "/etc/vz.conf" - there is no such file also. > I have "/etc/vz/vz.conf" on my bare metal openvz7 but there is no > "IPTABLES" or "IP6TABLES" mention at all. > > > чт, 15 груд. 2022 р. о 02:03 Arjit Chaudhary пише: > >> (In my experience) >> >> Does the host-node have working IPv6 configured on it? If yes, >> >> Then you can just do, >> >> vzctl set --ipadd -- save >> >> and the container too will have working IPv6. >> >> If the host-node does not have working IPv6, then you can just configure >> any address of the IPv6 range on the host-node >> >> >> On Thu, Dec 15, 2022 at 4:15 AM UNLIM.SRV wrote: >> >>> Trying to setup IPv6 for CentOS7 CT. >>> Seems original documentation outdated a bit ( >>> https://wiki.openvz.org/IPv6). >>> Is there updated information? >>> >>> Thank you! >>> >>> >>> -- >>> Oleksiy >>> ___ >>> Users mailing list >>> Users@openvz.org >>> https://lists.openvz.org/mailman/listinfo/users >>> >> >> >> -- >> Thanks, >> Arjit Chaudhary >> ___ >> Users mailing list >> Users@openvz.org >> https://lists.openvz.org/mailman/listinfo/users >> > ___ > Users mailing list > Users@openvz.org > https://lists.openvz.org/mailman/listinfo/users > ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
Re: [Users] IPv6, openVZ7, CentOS7
(In my experience) Does the host-node have working IPv6 configured on it? If yes, Then you can just do, vzctl set --ipadd -- save and the container too will have working IPv6. If the host-node does not have working IPv6, then you can just configure any address of the IPv6 range on the host-node On Thu, Dec 15, 2022 at 4:15 AM UNLIM.SRV wrote: > Trying to setup IPv6 for CentOS7 CT. > Seems original documentation outdated a bit (https://wiki.openvz.org/IPv6 > ). > Is there updated information? > > Thank you! > > > -- > Oleksiy > ___ > Users mailing list > Users@openvz.org > https://lists.openvz.org/mailman/listinfo/users > -- Thanks, Arjit Chaudhary ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
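The CTID and address placeholders in the quoted command were stripped by the list software; the general shape is `vzctl set <CTID> --ipadd <address> --save`. A dry-run sketch that only echoes the command, with a made-up container ID and a documentation-range IPv6 address:

```shell
#!/bin/sh
# Build (but do not run) the vzctl command that assigns an IPv6
# address to a container, so it can be reviewed before use.
vz_add_ipv6() {
    ctid="$1"
    addr="$2"
    echo "vzctl set $ctid --ipadd $addr --save"
}

vz_add_ipv6 101 2001:db8::10
```

Remove the `echo` (or pipe the output to `sh`) once the command looks right for your host.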
[Users] rockylinux-8-x86_64-ez template availability ?
Hello, I was looking at the template list https://src.openvz.org/projects/OVZT and noticed that rockylinux-8-x86_64-ez is listed there now, I tried installing this RockyLinux 8 template too, but it does not install, [root@ovz7 ~]# yum install rockylinux-8-x86_64-ez Loaded plugins: fastestmirror, openvz, priorities, vzlinux Loading mirror speeds from cached hostfile * epel: mirror.lshiy.com * openvz-os: mirrors.sonic.net * openvz-updates: mirrors.sonic.net 913 packages excluded due to repository priority protections No package rockylinux-8-x86_64-ez available. Error: Nothing to do Is the command different to install the package? -- Thanks, Arjit Chaudhary ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
[Users] VCMMD not starting on boot
Hello, in the last few days I ran into an issue with VCMMD on OpenVZ 7 after a reboot, I am unable to boot any VM, [root@de48 ~]# vzctl start 14107 Starting Container ... Mount image: /vz/private/14107/root.hdd Container is mounted Setting permissions for image=/vz/private/14107/root.hdd vcmmd: failed to register Container: Failed to connect to VCMMD service vcmmd: failed to unregister Container: Failed to connect to VCMMD service Unmount image: /vz/private/14107/root.hdd (190) Container is unmounted vcmmd: failed to unregister Container: Failed to connect to VCMMD service Failed to start the Container BUT if I issue the command mkdir /sys/fs/cgroup/memory/user.slice and then service vcmmd start VCMMD starts and runs fine, [root@de48 ~]# vzctl start 14107 Starting Container ... Mount image: /vz/private/14107/root.hdd Container is mounted Setting permissions for image=/vz/private/14107/root.hdd Setting permissions for image=/vz/private/14107/root.hdd Adding ip address(es): 192.168.0.100 Warning: distribution not specified default used /usr/libexec/libvzctl/dists/default Container start in progress... I have tried multiple kernels, factory kernels also! Currently I am on, [root@de48 ~]# uname -r 3.10.0-1160.31.1.vz7.181.10 [root@de48 ~]# The config file is quite basic too, [root@de48 conf]# cat 14107.conf PHYSPAGES="262144:262144" SWAPPAGES="0:0" DISABLED="no" VE_PRIVATE="/vz/private/$VEID" VEID="14107" UUID="a9fa54d5-8a8d-4337-8341-c09a881aa981" IP_ADDRESS="192.168.100" NETFILTER="full" NUMIPTENT="9223372036854775807:9223372036854775807" ONBOOT="yes" VE_ROOT="/vz/root/$VEID" any idea / hint what could be causing this issue? -- Thanks, Arjit Chaudhary ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
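Until the root cause is found, the manual workaround above (creating the cgroup directory before starting vcmmd) could be made to survive reboots with a systemd drop-in. This is a hypothetical sketch, not an official fix; the unit name vcmmd.service is assumed from the `service vcmmd start` command above, so verify it with `systemctl list-units | grep vcmmd` first:

```ini
# /etc/systemd/system/vcmmd.service.d/cgroup-workaround.conf
# Recreate the memory cgroup directory vcmmd appears to need at startup.
# The leading "-" tells systemd to ignore a failure (e.g. dir exists).
[Service]
ExecStartPre=-/usr/bin/mkdir -p /sys/fs/cgroup/memory/user.slice
```

Run `systemctl daemon-reload` after creating the drop-in so systemd picks it up.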
Re: [Users] Caching CentOS 6 Template on OpenVZ 7
That could just work, actually, but I was trying to see if it worked out with the vault repo in case I have to re-sync it sometime in the future on another VZ server. For example, Debian 7 had the same issue, but I edited url.map under /vz/template/conf/vztt to, $DEB_SERVER http://archive.debian.org and that allows me to cache the template. On Mon, Dec 21, 2020 at 11:16 PM Scott Dowdle wrote: > Greetings, > > - Original Message - > > I see now, so after updating >[...] > > I get " 14: Peer cert cannot be verified or peer cert invalid" > > I just rsync from a vault mirror to make my own local mirror... and did > the same for EPEL 6 as well. > > Let me know if you think that'd be helpful for you or not? > > TYL, > -- > Scott Dowdle > 704 Church Street > Belgrade, MT 59714 > (406)388-0827 [home] > (406)994-3931 [work] > ___ > Users mailing list > Users@openvz.org > https://lists.openvz.org/mailman/listinfo/users > -- Thanks, Arjit Chaudhary ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
Re: [Users] Caching CentOS 6 Template on OpenVZ 7
Hi, I see now, so after updating /vz/template/centos/6/x86_64/config/os/default/mirrorlist with the vault URLs, $SW_SERVER/download/mirrors/centos-6 $SW_SERVER/download/mirrors/updates-released-ce6 https://vault.centos.org/6.10/os/x86_64/ https://vault.centos.org/6.10/updates/x86_64/ I get " 14: Peer cert cannot be verified or peer cert invalid" [root@home vztt]# vzpkg update cache centos-6-x86_64 Update OS template cache for centos-6-x86_64 template Cleaning repos: base0 base1 base2 base3 12 metadata files removed 0 sqlite files removed 0 metadata files removed base0 | 951 B 00:00 base0/filelists | 824 B 00:00 base0/primary | 1.3 kB 00:00 base0/other | 707 B 00:00 base1 | 951 B 00:00 base1/filelists | 3.0 kB 00:00 base1/primary | 2.2 kB 00:00 base1/other | 8.3 kB 00:00 Could not retrieve mirrorlist https://vault.centos.org/6.10/os/x86_64/ error was 14: Peer cert cannot be verified or peer cert invalid Traceback (most recent call last): File "/usr/share/vzyum/bin/yum", line 30, in File "/usr/share/vzyum/yum-cli/yummain.py", line 293, in user_main File "/usr/share/vzyum/yum-cli/yummain.py", line 144, in main File "/usr/share/vzyum/yum-cli/cli.py", line 442, in doCommands File "/usr/share/vzyum/yum-cli/yumcommands.py", line 561, in doCommand File "/usr/share/vzyum/lib/yum/yumRepo.py", line 1474, in File "/usr/share/vzyum/lib/yum/yumRepo.py", line 1466, in _getRepoXML File "/usr/share/vzyum/lib/yum/yumRepo.py", line 1456, in _loadRepoXML File "/usr/share/vzyum/lib/yum/yumRepo.py", line 1431, in _groupLoadRepoXML File "/usr/share/vzyum/lib/yum/yumRepo.py", line 1249, in _commonLoadRepoXML File "/usr/share/vzyum/lib/yum/yumRepo.py", line 1027, in _getFileRepoXML File "/usr/share/vzyum/lib/yum/yumRepo.py", line 834, in _getFile File "/usr/share/vzyum/lib/yum/yumRepo.py", line 519, in File "/usr/share/vzyum/lib/yum/yumRepo.py", line 514, in _getgrab File "/usr/share/vzyum/lib/yum/yumRepo.py", line 484, in _setupGrab File "/usr/share/vzyum/lib/yum/yumRepo.py", line 699, in 
File "/usr/share/vzyum/lib/yum/yumRepo.py", line 696, in _geturls File "/usr/share/vzyum/lib/yum/yumRepo.py", line 662, in _baseurlSetup File "/usr/share/vzyum/lib/yum/yumRepo.py", line 426, in check AttributeError: 'YumRepository' object has no attribute 'mirrorlistfn' Error: /usr/share/vzyum/bin/yum failed, exitcode=1 On Mon, Dec 21, 2020 at 9:44 PM Denis Silakov wrote: > Why did you think CE_SERVER will take any effect? I can see that we just > request CentOS for its mirrors: > > > https://src.openvz.org/projects/OVZT/repos/centos-6-x86_64-ez/browse/os_mirrorlist?at=dist-vz7-u16 > > So one should either replace these two lines with vault mirror or switch > template to baseurls (create "repositories" file instead of "mirrors" and > place repo URL(s) there). > -- > *From:* users-boun...@openvz.org on behalf of > Arjit Chaudhary > *Sent:* Monday, December 21, 2020 6:54 PM > *To:* OpenVZ users > *Subject:* Re: [Users] Caching CentOS 6 Template on OpenVZ 7 > > Hi, > I tried to add the URL into /vz/template/conf/vztt/url.map > > ie, > > $CE_SERVER https://vault.centos.org/6.10/os/x86_64/ > > and it still errors with the same. > > > > > > On Mon, Dec 21, 2020 at 9:11 PM Denis Silakov > wrote: > > Yes, a direct baseurl like https://vault.centos.org/6.10/os/x86_64/ > should work. > > Note however that vault doesn't seem to like massive downloads, be ready > for connection breaks etc. > -- > *From:* users-boun...@openvz.org on behalf of > Scott Dowdle > *Sent:* Monday, December 21, 2020 6:29 PM > *To:* OpenVZ users > *Subject:* Re: [Users] Caching CentOS 6 Template on OpenVZ 7 > > Greetings, > > - Original Message - > > I understand CentOS 6 is EOL, but for some reason, a few times it's > [..] > > I tried to cache the centos-6 template and it errors with, > > They have moved their repo to vault but if you update the URLs used to > reflect the change, it should w
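Following the suggestion above to switch the template from a mirrorlist to baseurls, a sketch of the `repositories` file (created in place of the `mirrors`/mirrorlist file, per Denis's note). The path mirrors the mirrorlist file quoted earlier in this thread; plain http sidesteps the "Peer cert cannot be verified" error from the bundled yum, at the cost of unverified transport. File names and exact layout should be checked against your template config:

```
# /vz/template/centos/6/x86_64/config/os/default/repositories
# Direct baseurls instead of a mirrorlist; plain http avoids the
# certificate-verification failure seen with the old in-tree yum.
http://vault.centos.org/6.10/os/x86_64/
http://vault.centos.org/6.10/updates/x86_64/
```

After creating this file (and removing the mirrorlist it replaces), re-run `vzpkg update cache centos-6-x86_64`.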
Re: [Users] Caching CentOS 6 Template on OpenVZ 7
Hi, I tried to add the URL into /vz/template/conf/vztt/url.map ie, $CE_SERVER https://vault.centos.org/6.10/os/x86_64/ and it still errors with the same. On Mon, Dec 21, 2020 at 9:11 PM Denis Silakov wrote: > Yes, a direct baseurl like https://vault.centos.org/6.10/os/x86_64/ > should work. > > Note however that vault doesn't seem to like massive downloads, be ready > for connection breaks etc. > -- > *From:* users-boun...@openvz.org on behalf of > Scott Dowdle > *Sent:* Monday, December 21, 2020 6:29 PM > *To:* OpenVZ users > *Subject:* Re: [Users] Caching CentOS 6 Template on OpenVZ 7 > > Greetings, > > - Original Message - > > I understand CentOS 6 is EOL, but for some reason, a few times it's > [..] > > I tried to cache the centos-6 template and it errors with, > > They have moved their repo to vault but if you update the URLs used to > reflect the change, it should work. > > TYL, > -- > Scott Dowdle > 704 Church Street > Belgrade, MT 59714 > (406)388-0827 [home] > (406)994-3931 [work] > ___ > Users mailing list > Users@openvz.org > https://lists.openvz.org/mailman/listinfo/users > ___ > Users mailing list > Users@openvz.org > https://lists.openvz.org/mailman/listinfo/users > -- Thanks, Arjit Chaudhary ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
[Users] Caching CentOS 6 Template on OpenVZ 7
Hello, I understand CentOS 6 is EOL, but for some reason, a few times it's needed to test things out as many people still use it, I tried to cache the centos-6 template and it errors with, [root@home ~]# vzpkg update cache centos-6-x86_64 Update OS template cache for centos-6-x86_64 template Cleaning repos: base0 base1 base2 base3 12 metadata files removed 0 sqlite files removed 0 metadata files removed base0 | 951 B 00:00 base0/filelists | 824 B 00:00 base0/primary | 1.3 kB 00:00 base0/other | 707 B 00:00 base1 | 951 B 00:00 base1/filelists | 3.0 kB 00:00 base1/primary | 2.2 kB 00:00 base1/other | 8.3 kB 00:00 YumRepo Error: All mirror URLs are not using ftp, http[s] or file. Eg. Invalid release/repo/arch combination/ removing mirrorlist with no valid mirrors: /vz/template/centos/6/x86_64/pm/base2/mirrorlist.txt Traceback (most recent call last): File "/usr/share/vzyum/bin/yum", line 30, in File "/usr/share/vzyum/yum-cli/yummain.py", line 293, in user_main File "/usr/share/vzyum/yum-cli/yummain.py", line 144, in main File "/usr/share/vzyum/yum-cli/cli.py", line 442, in doCommands File "/usr/share/vzyum/yum-cli/yumcommands.py", line 561, in doCommand File "/usr/share/vzyum/lib/yum/yumRepo.py", line 1474, in File "/usr/share/vzyum/lib/yum/yumRepo.py", line 1466, in _getRepoXML File "/usr/share/vzyum/lib/yum/yumRepo.py", line 1456, in _loadRepoXML File "/usr/share/vzyum/lib/yum/yumRepo.py", line 1431, in _groupLoadRepoXML File "/usr/share/vzyum/lib/yum/yumRepo.py", line 1249, in _commonLoadRepoXML File "/usr/share/vzyum/lib/yum/yumRepo.py", line 1027, in _getFileRepoXML File "/usr/share/vzyum/lib/yum/yumRepo.py", line 834, in _getFile File "/usr/share/vzyum/lib/yum/yumRepo.py", line 519, in File "/usr/share/vzyum/lib/yum/yumRepo.py", line 514, in _getgrab File "/usr/share/vzyum/lib/yum/yumRepo.py", line 484, in _setupGrab File "/usr/share/vzyum/lib/yum/yumRepo.py", line 699, in File "/usr/share/vzyum/lib/yum/yumRepo.py", line 696, in _geturls File 
"/usr/share/vzyum/lib/yum/yumRepo.py", line 662, in _baseurlSetup File "/usr/share/vzyum/lib/yum/yumRepo.py", line 426, in check AttributeError: 'YumRepository' object has no attribute 'mirrorlistfn' Error: /usr/share/vzyum/bin/yum failed, exitcode=1 any idea how to cache/add this in? -- Thanks, Arjit Chaudhary ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
[Users] Ubuntu 20.04 Template
Hi, Might be a bit soon to ask, but would the Ubuntu 20.04 template be supported under OpenVZ 7? -- Thanks, Arjit Chaudhary ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
Re: [Users] openvz 7 backups
perl-LockFile-Simple seems to be there in EPEL Repository for OpenVZ 7 / CentOS 7 On Tue, Mar 10, 2020 at 9:00 PM wrote: > The documentation is outdates. > > I find vzdump here > https://download.openvz.org/contrib/utils/vzdump/vzdump-1.2-4.noarch.rpm > > But it is looking for > > cstream > > perl-LockFile-Simple > > > > but i cant fin dit. > > Any hints > > > > Steffan > > > > *Van:* users-boun...@openvz.org *Namens *Paulo > Coghi - Coghi IT > *Verzonden:* woensdag 26 februari 2020 16:00 > *Aan:* OpenVZ users > *Onderwerp:* Re: [Users] openvz 7 backups > > > > I use vzdump, as described here: > https://wiki.openvz.org/Backup_of_a_running_container_with_vzdump > > I strongly suggest enabling LVM when starting a new host to allow the use > of LVM2 when creating the backups with zero downtime. > > Paulo Coghi > > > > On Wed, Feb 26, 2020 at 11:30 AM wrote: > > How do you backup? > i make several backups of my systems with rsnapshot. > > > > *Van:* users-boun...@openvz.org *Namens *Paulo > Coghi - Coghi IT > *Verzonden:* woensdag 26 februari 2020 15:03 > *Aan:* OpenVZ users > *Onderwerp:* Re: [Users] openvz 7 backups > > > > Never used. > > > > On Tue, Feb 25, 2020 at 11:44 AM wrote: > > Are there people working with: > > https://www.openvz-diff-backups.fr/ > > > > It looks like a tweak of rsnapshot that i use for my openvz6 containers > > > > Any experience? > > > > Thanxs > > > Steffan > > ___ > Users mailing list > Users@openvz.org > https://lists.openvz.org/mailman/listinfo/users > > _______ > Users mailing list > Users@openvz.org > https://lists.openvz.org/mailman/listinfo/users > > ___ > Users mailing list > Users@openvz.org > https://lists.openvz.org/mailman/listinfo/users > -- Thanks, Arjit Chaudhary ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
Re: [Users] add centos8 container template to openvz7
I think yum install centos-8-x86_64-ez should have worked in this case, as it is listed/available in the release repository (ie, non-factory repository) to install the template and then vzpkg create cache centos-8 to cache the template On Mon, Jan 6, 2020 at 4:57 PM Jehan PROCACCIA wrote: > Hello , > > I am trying to run a centos8 container on my openvz 7 : Virtuozzo Linux > release 7.7 host . > initialy there was no centos8 template > so I installed centos-8 ez package from > https://download.openvz.org/virtuozzo/releases/7.0/x86_64/os/Packages/c/ > *# rpm -Uvh centos-8-x86_64-ez-7.0.0-5.vz7.noarch.rpm* > > and created the cache > *# vzpkg create cache centos-8* > *Creating OS template cache for centos-8 template* > *..* > *Complete!* > *OS template centos-8 cache was created* > > procedure read from > https://forum.openvz.org/index.php?t=rview=52133=12945 ... > > Now I do have centos-8-x86_64listed (which was not there before) > > [root@olbia ~]# vzpkg list -O --with-summary > centos-7-x86_64:Centos 7 (for AMD64/Intel EM64T) > Virtuozzo Template > *centos-8-x86_64:Centos 8 (for AMD64/Intel EM64T) > Virtuozzo Template* > centos-6-x86_64:Centos 6 (for AMD64/Intel EM64T) > Virtuozzo Template > debian-8.0-x86_64 :Debian 8.0 (for AMD64/Intel EM64T) > Virtuozzo Template > debian-8.0-x86_64-minimal :Debian 8.0 minimal (for AMD64/Intel > EM64T) Virtuozzo Template > ubuntu-16.04-x86_64:Ubuntu 16.04 (for AMD64/Intel EM64T) > Virtuozzo Template > ubuntu-14.04-x86_64:Ubuntu 14.04 (for AMD64/Intel EM64T) > Virtuozzo Template > vzlinux-7-x86_64 :VzLinux 7 (for AMD64/Intel EM64T) > Virtuozzo Template > > Is this the correct way (and complete) to add a new EZ template or is > there a better/shorter way ? > > Thanks . 
> > *Jehan PROCACCIA* > Ingénieur systèmes et réseaux > Membre du comité de pilotage REVE : > Réseau d’Évry Val d'Essonne et THD > +33160764436 > 9 rue Charles Fourier - 91011 Evry Cedex > *www.imt-bs.eu* <https://www.imt-bs.eu> - *www.telecom-sudparis.eu* > <https://www.telecom-sudparis.eu> > > ___ > Users mailing list > Users@openvz.org > https://lists.openvz.org/mailman/listinfo/users > -- Thanks, Arjit Chaudhary ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
Re: [Users] Repairing a ploop image
So the fix for this involves, scp the private folder + image (root.hdd/hdds, DiskDescriptor.xml etc) to a OpenVZ 6 based node Then running ploop balloon discard, ploop balloon discard --automount /vz/private /$CT/root.hdd/DiskDescriptor.xml Then scp the image back to the OpenVZ 7 based node into a temporary folder Recreate the CT, adjust the resources (disk etc) mv the root.hdd/root.hdds file to overwrite the existing root.hdds file that got created by the recreate step so far so good. On Sat, Jan 4, 2020 at 12:27 AM Arjit Chaudhary wrote: > Seems like someone had this issue earlier too, > https://lists.openvz.org/pipermail/users/2017-August/007361.html > > even the compact utility with --automount doesnt help in this case > > On Fri, Jan 3, 2020 at 9:30 PM Skirmantas Juraška > wrote: > >> I think this is a bug in the newest released ploop package version. I >> have opened a bug report but no one is paying attention. You can try to >> downgrade the ploop package or move this container (ploop) to the older >> node. Container should work. >> >> 2020-01-03, pn, 17:36 Arjit Chaudhary rašė: >> >>> Hi >>> I've been trying to repair this ploop image >>> >>> [root@internal ~]# ploop check -F /vz/private/102/root.hdd/root.hds >>> Reopen rw /vz/private/102/root.hdd/root.hds >>> Opening delta /vz/private/102/root.hdd/root.hds >>> Error in range_build_rmap (balloon_util.c:542): Image corrupted: L2[38] >>> == 155797 (max=188741632) (2) >>> >>> I can't seem to find any message / information about fixing " Error in >>> range_build_rmap " on google either, is this a non-recoverable image ? 
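The recovery that worked can be condensed into a runbook. The sketch below only prints the steps (container ID, remote host name, and temp paths are placeholders), so nothing runs by accident:

```shell
#!/bin/sh
# Print the ploop-repair-via-OpenVZ-6 runbook described above.
# "$1" is the container ID; vz6node and /tmp paths are placeholders.
ploop_repair_runbook() {
    ct="$1"
    cat <<EOF
# 1. Copy the broken image (root.hdd with DiskDescriptor.xml) to an OpenVZ 6 node
scp -r /vz/private/$ct/root.hdd vz6node:/tmp/$ct-root.hdd
# 2. On the OpenVZ 6 node, run the balloon discard (it mounts the image itself)
ploop balloon discard --automount /tmp/$ct-root.hdd/DiskDescriptor.xml
# 3. Copy the image back to the OpenVZ 7 node, recreate the CT with the
#    same resources, then overwrite the freshly created root.hds
mv /tmp/$ct-root.hdd/root.hds /vz/private/$ct/root.hdd/root.hds
EOF
}

ploop_repair_runbook 102
```

Each printed command is meant to be run by hand on the node indicated in its comment, not piped blindly into a shell.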
>>> >>> -- >>> Thanks, >>> Arjit Chaudhary >>> ___ >>> Users mailing list >>> Users@openvz.org >>> https://lists.openvz.org/mailman/listinfo/users >>> >> ___ >> Users mailing list >> Users@openvz.org >> https://lists.openvz.org/mailman/listinfo/users >> > > > -- > Thanks, > Arjit Chaudhary > -- Thanks, Arjit Chaudhary ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
Re: [Users] Repairing a ploop image
Seems like someone had this issue earlier too, https://lists.openvz.org/pipermail/users/2017-August/007361.html even the compact utility with --automount doesnt help in this case On Fri, Jan 3, 2020 at 9:30 PM Skirmantas Juraška wrote: > I think this is a bug in the newest released ploop package version. I have > opened a bug report but no one is paying attention. You can try to > downgrade the ploop package or move this container (ploop) to the older > node. Container should work. > > 2020-01-03, pn, 17:36 Arjit Chaudhary rašė: > >> Hi >> I've been trying to repair this ploop image >> >> [root@internal ~]# ploop check -F /vz/private/102/root.hdd/root.hds >> Reopen rw /vz/private/102/root.hdd/root.hds >> Opening delta /vz/private/102/root.hdd/root.hds >> Error in range_build_rmap (balloon_util.c:542): Image corrupted: L2[38] >> == 155797 (max=188741632) (2) >> >> I can't seem to find any message / information about fixing " Error in >> range_build_rmap " on google either, is this a non-recoverable image ? >> >> -- >> Thanks, >> Arjit Chaudhary >> ___ >> Users mailing list >> Users@openvz.org >> https://lists.openvz.org/mailman/listinfo/users >> > ___ > Users mailing list > Users@openvz.org > https://lists.openvz.org/mailman/listinfo/users > -- Thanks, Arjit Chaudhary ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
[Users] Repairing a ploop image
Hi I've been trying to repair this ploop image [root@internal ~]# ploop check -F /vz/private/102/root.hdd/root.hds Reopen rw /vz/private/102/root.hdd/root.hds Opening delta /vz/private/102/root.hdd/root.hds Error in range_build_rmap (balloon_util.c:542): Image corrupted: L2[38] == 155797 (max=188741632) (2) I can't seem to find any message / information about fixing " Error in range_build_rmap " on google either, is this a non-recoverable image ? -- Thanks, Arjit Chaudhary ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
[Users] OpenVZ 7 errors when starting a container Unable to clone: Too many open files
Hi, I've been facing this odd error in the past day, with > Unable to clone: Too many open files The containers just stop and do not start up again, currently running kernel 3.10.0-957.27.2.vz7.107.4 I've checked this out on google and wasn't able to find much on it, any help would be appreciated. [root@internal boot]# vzctl start 101 Starting Container ... Mount image: /vz/private/101/root.hdd Container is mounted Setting permissions for image=/vz/private/101/root.hdd Configure memguarantee: 0% Unable to clone: Too many open files Unmount image: /vz/private/101/root.hdd Container is unmounted Failed to start the Container -- Thanks, Arjit Chaudhary ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
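As a starting point for debugging, it can help to compare host-wide file handle usage against the limit, since "Too many open files" at clone time may come from host-level limits rather than the container itself (an assumption, not a confirmed diagnosis for this report):

```shell
#!/bin/sh
# Inspect open-file pressure on the host. Both interfaces are standard
# Linux: /proc/sys/fs/file-nr reports "allocated  free  max" handles,
# and ulimit -n is the current shell's per-process descriptor limit.
echo "system-wide file handles (allocated / free / max):"
cat /proc/sys/fs/file-nr
echo "per-process soft limit: $(ulimit -n)"
```

If the allocated count is near the max, raising `fs.file-max` via sysctl is one avenue to try; otherwise the limit being hit is likely somewhere else.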
[Users] Getting Operation too slow errors on Virtuozzo Mirrors
Hi, I often see this error from my server in Germany, server is on a 100 Mbit/s connection with minimal usage, http://repo.virtuozzo.com/vzlinux/7/x86_64/os/Packages/p/perl-5.16.3-294.vl7.x86_64.rpm: [Errno 12] Timeout on http://repo.virtuozzo.com/vzlinux/7/x86_64/os/Packages/p/perl-5.16.3-294.vl7.x86_64.rpm: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds') Trying other mirror. Is there any other recommended base mirror to use ? -- Thanks, Arjit Chaudhary ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
Re: [Users] Difference between swap and swappages?
Hi, > Is it for containers or the host? You can use vzubc to check it. The containers are around 1GB - 2GB RAM allocation with 2 - 4G swap each (Swap = 2x RAM) re: vzubc No package vzubc available. Error: Nothing to do On Wed, Dec 11, 2019 at 1:24 AM Kirill Kolyshkin wrote: > On Tue, 10 Dec 2019 at 11:30, Arjit Chaudhary wrote: > >> Hi, >> Aha so they're the same thing, was just curious as I've been noticing a >> bit of odd behavior on OpenVZ 7 host-nodes where the node has 128G RAM, >> around 30G used and it's swapping out 12G still instead of using the RAM as >> vSwap. Swappiness is set to 10 as well. >> > > Is it for containers or the host? You can use vzubc to check it. > > In general, it's normal for some swappiness to occur even in case there's > enough free RAM, but I agree that the figure of 12G looks a bit too big. > > >> >> I'm using the following commands to build the container if that help, >> >> /usr/sbin/vzctl create $VMID --ostemplate $DISTRO >> /usr/sbin/vzctl set $VMID --ipadd $VPS_IP --save >> /usr/sbin/vzctl set $VMID --hostname vps.server.com --save >> /usr/sbin/vzctl set $VMID --nameserver 8.8.8.8 --save >> /usr/sbin/vzctl set $VMID --diskspace >> $DISK_SPACE$DISK_VARIABLE:$DISK_SPACE$DISK_VARIABLE --save >> /usr/sbin/vzctl set $VMID --ram $RAM$RAM_VARIABLE --swappages >> $SWAP$SWAP_VARIABLE --save >> >> > Again, please read the man page and use --swap not --swappages. If you're > so inclined to use --swappages, please do it in a correct manner (barrier > should be set to 0 as described). > > >> >> >> On Wed, Dec 11, 2019 at 12:51 AM Kirill Kolyshkin >> wrote: >> >>> On Tue, 10 Dec 2019 at 10:42, Arjit Chaudhary wrote: >>> >>>> Hi, >>>> Is there any difference between setting vSwap on the OpenVZ 7 based >>>> virtual machine? 
>>>> >>>> vzctl set $VMID --ram $RAM --swap $SWAP --save >>>> >>>> vs >>>> >>>> vzctl set $VMID --ram $RAM --swappages $SWAP --save >>>> >>>> >>>> does --swap end up using real swap on the host-node >>>> >>>> and >>>> >>>> --swappages use the RAM as swap (ie, vswap) ? >>>> >>>> >>>> or are both the same ? >>>> >>> >>> Please take a look at vzctl man page, it's all described in there. In >>> short, you should use --swap; >>> --swappages is working but obsolete. >>> >>> >>> ___ >>> Users mailing list >>> Users@openvz.org >>> https://lists.openvz.org/mailman/listinfo/users >>> >> >> >> -- >> Thanks, >> Arjit Chaudhary >> ___ >> Users mailing list >> Users@openvz.org >> https://lists.openvz.org/mailman/listinfo/users >> > ___ > Users mailing list > Users@openvz.org > https://lists.openvz.org/mailman/listinfo/users > -- Thanks, Arjit Chaudhary ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
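Per the man-page advice quoted above (use --swap, not the obsolete --swappages), the build script's last line would take the form below. A dry-run sketch: the function only echoes the command, and the CTID and sizes are made up:

```shell
#!/bin/sh
# Echo (rather than run) the preferred vSwap form of the command.
vz_set_mem() {
    ctid="$1"; ram="$2"; swap="$3"
    echo "vzctl set $ctid --ram $ram --swap $swap --save"
}

vz_set_mem 101 2G 4G
```

Drop the `echo` once the values look right for the container being built.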
Re: [Users] Difference between swap and swappages?
Hi, Aha, so they're the same thing. I was just curious, as I've been noticing a bit of odd behavior on OpenVZ 7 host-nodes where the node has 128G RAM, around 30G used, and it's still swapping out 12G instead of using the RAM as vSwap. Swappiness is set to 10 as well. I'm using the following commands to build the container, if that helps, /usr/sbin/vzctl create $VMID --ostemplate $DISTRO /usr/sbin/vzctl set $VMID --ipadd $VPS_IP --save /usr/sbin/vzctl set $VMID --hostname vps.server.com --save /usr/sbin/vzctl set $VMID --nameserver 8.8.8.8 --save /usr/sbin/vzctl set $VMID --diskspace $DISK_SPACE$DISK_VARIABLE:$DISK_SPACE$DISK_VARIABLE --save /usr/sbin/vzctl set $VMID --ram $RAM$RAM_VARIABLE --swappages $SWAP$SWAP_VARIABLE --save On Wed, Dec 11, 2019 at 12:51 AM Kirill Kolyshkin wrote: > On Tue, 10 Dec 2019 at 10:42, Arjit Chaudhary wrote: > >> Hi, >> Is there any difference between setting vSwap on the OpenVZ 7 based >> virtual machine? >> >> vzctl set $VMID --ram $RAM --swap $SWAP --save >> >> vs >> >> vzctl set $VMID --ram $RAM --swappages $SWAP --save >> >> >> does --swap end up using real swap on the host-node >> >> and >> >> --swappages use the RAM as swap (ie, vswap) ? >> >> >> or are both the same ? >> > > Please take a look at vzctl man page, it's all described in there. In > short, you should use --swap; > --swappages is working but obsolete. > > > _______ > Users mailing list > Users@openvz.org > https://lists.openvz.org/mailman/listinfo/users > -- Thanks, Arjit Chaudhary ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
[Users] Difference between swap and swappages?
Hi, Is there any difference between setting vSwap on the OpenVZ 7 based virtual machine? vzctl set $VMID --ram $RAM --swap $SWAP --save vs vzctl set $VMID --ram $RAM --swappages $SWAP --save does --swap end up using real swap on the host-node and --swappages use the RAM as swap (ie, vswap) ? or are both the same ? -- Thanks, Arjit Chaudhary ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
[Users] Enabling VNC Access to Containers
Hi, I came across this feature earlier today, https://docs.openvz.org/openvz_users_guide.webhelp/_enabling_vnc_access_to_containers.html It seems to be a very good option for providing users with a rescue/out-of-band console option in case they get locked out via a firewall setting etc I tried configuring it on a VPS (container) but it seems to error no matter what option is used, with auto mode, [root@dev ~]# prlctl set 100 --vnc-mode auto --vnc-port 6501 --vnc-passwd test123 Configure VNC: Remote display: mode=auto port=6501 Unable to commit CT configuration: Unable to start the VNC server in this virtual machine. Change the VNC server port number in the virtual machine configuration and try again. If the problem persists, contact the Virtuozzo support team for assistance. Failed to configure the virtual machine. and with manual mode as well, [root@dev ~]# prlctl set 100 --vnc-mode manual --vnc-port 6501 --vnc-passwd test123 Configure VNC: Remote display: mode=manual port=6501 Unable to commit CT configuration: Unable to start the VNC server in this virtual machine. Change the VNC server port number in the virtual machine configuration and try again. If the problem persists, contact the Virtuozzo support team for assistance. Failed to configure the virtual machine. Any help would be appreciated, port 6501 is open and unused as well. -- Thanks, Arjit Chaudhary ___ Users mailing list Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
[Users] Installing CentOS 5 Template on OpenVZ 7 ?
Hi,

I see the template is available under the OpenVZ template repo:
https://src.openvz.org/projects/OVZT/repos/centos-5-x86-ez/browse

I understand OpenVZ 5 is end of life, but I wanted to know if there's a way to install this template for some legacy applications.

When I check "vzpkg list --available", it lists no CentOS 5 template (the 6 & 7 templates are installed already):

fedora-23-x86_64 openvz-os
sles-11-x86_64 openvz-os
sles-12-x86_64 openvz-os
sles-15-x86_64 openvz-os
suse-42.1-x86_64 openvz-os
suse-42.2-x86_64 openvz-os
suse-42.3-x86_64 openvz-os
ubuntu-17.10-x86_64 openvz-os
vzlinux-6-x86_64 openvz-os
vzlinux-7-x86_64 openvz-os

--
Thanks,
Arjit Chaudhary
Re: [Users] Creating a VM
I think you would need to use:

--device-add cdrom {--device <name> | --image <image>} [--iface <iface>] [--subtype <subtype>] [--passthr] [--position <position>]

So, to mount the ISO image:

prlctl set c7-vm1 --device-add cdrom --image /path/to/iso

If you have to specify "--device <name>" as well, then I'd suggest checking the cd-rom device from:

virsh edit c7-vm1

It could be hda or hdc.

On Wed, Jul 3, 2019 at 2:18 AM jjs - mainphrame wrote:
> I've long been using openvz for running containers. Now I'm looking into running VMs.
>
> I've looked through the docs and can't find a description of how to install an OS into a VM, once created.
>
> I created a centos 7 vm with:
> # prlctl create c7-vm1 --distribution centos7 --vmtype vm
>
> I set the remote access with:
> # prlctl set c7-vm1 --vnc-mode manual --vnc-port 59000 --vnc-passwd xx
>
> But I don't see how to launch an installer, as I would in e.g. kvm. I can see the vm booting up through the vnc connection, and it apparently can't find the iso image sitting in the vmprivate/vm-id directory.
>
> I'd like to buy a clue, please.
>
> jake

--
Thanks,
Arjit Chaudhary
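The attach-and-install flow above can be sketched end to end as a dry run (RUN=echo, so nothing is executed; the VM name matches the thread, while the ISO path is a made-up example):

```shell
#!/bin/sh
# Dry-run sketch of attaching an install ISO to a VZ7 VM and booting it.
# RUN=echo prints each step; the ISO path is an example, not a real file.
RUN=echo
VM=c7-vm1
ISO=/vz/iso/CentOS-7-x86_64-Minimal.iso

$RUN prlctl stop "$VM"
# Attach the ISO as a cdrom device so the VM can boot the installer
$RUN prlctl set "$VM" --device-add cdrom --image "$ISO"
$RUN prlctl start "$VM"
# Then watch the installer over the VNC console configured earlier
```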
Re: [Users] after upgrade of vz7: Failed to get VM config: The virtual machine could not be found.
> :47 Updated: selinux-policy-3.13.1-229.vl7.12.noarch
> May 28 14:25:48 Updated: libvirt-bash-completion-4.5.0-10.vz7.10.1.x86_64
> May 28 14:25:48 Updated: libvirt-client-4.5.0-10.vz7.10.1.x86_64
> May 28 14:25:49 Updated: libvirt-4.5.0-10.vz7.10.1.x86_64
> May 28 14:25:53 Updated: vzkernel-headers-3.10.0-957.12.2.vz7.86.2.x86_64
> May 28 14:25:55 Updated: glibc-headers-2.17-260.vl7.5.x86_64
> May 28 14:25:56 Updated: glibc-devel-2.17-260.vl7.5.x86_64
> May 28 14:25:56 Updated: python-sssdconfig-1.16.2-13.vl7.8.noarch
> May 28 14:25:56 Updated: sssd-1.16.2-13.vl7.8.x86_64
> May 28 14:25:59 Updated: gcc-4.8.5-36.vl7.2.x86_64
> May 28 14:26:01 Updated: prl-disp-service-7.0.942.2-1.vz7.x86_64
> May 28 14:26:20 Updated: selinux-policy-targeted-3.13.1-229.vl7.12.noarch
> May 28 14:26:22 Updated: kernel-tools-3.10.0-957.12.2.vz7.86.2.x86_64
> May 28 14:26:24 Updated: httpd-2.4.6-89.vl7.x86_64
> May 28 14:26:24 Updated: libvirt-daemon-kvm-4.5.0-10.vz7.10.1.x86_64
> May 28 14:26:24 Updated: libvirt-daemon-driver-vz-4.5.0-10.vz7.10.1.x86_64
> May 28 14:26:26 Updated: 7:lvm2-2.02.180-10.vl7.7.x86_64
> May 28 14:26:28 Updated: systemd-python-219-63.vl7.7.x86_64
> May 28 14:26:37 Updated: 2:microcode_ctl-2.1-47.2.vl7.x86_64
> May 28 14:26:49 Installed: vzkernel-3.10.0-957.12.2.vz7.86.2.x86_64
> May 28 14:26:49 Updated: ploop-7.0.140.2-1.vz7.x86_64
> May 28 14:26:50 Updated: python-ploop-7.0.140.2-1.vz7.x86_64
> May 28 14:26:50 Updated: prlctl-7.0.173.2-1.vz7.x86_64
> May 28 14:26:51 Updated: prl-disp-legacy-7.0.942.2-1.vz7.x86_64
> May 28 14:26:51 Updated: libgudev1-219-63.vl7.7.x86_64
> May 28 14:26:52 Updated: pango-1.42.4-2.vl7.x86_64
> May 28 14:26:53 Updated: virt-what-1.18-5.vl7.x86_64
> May 28 14:26:53 Updated: wget-1.14-18.vl7.1.x86_64
> May 28 14:26:54 Updated: python2-psutil-2.2.1-6.vl7.x86_64
> May 28 14:26:56 Updated: python-perf-3.10.0-957.12.2.vz7.86.2.x86_64
> May 28 14:26:58 Updated: ghostscript-9.07-31.vl7.11.x86_64
> May 28 14:26:58 Updated: libzstd-1.4.0-1.vl7.x86_64
> May 28 14:26:59 Updated: 10:qemu-kvm-tools-vz-2.12.0-18.6.3.vz7.21.6.x86_64
> May 28 14:27:01 Updated: python-urllib3-1.19.1-3.vl7.noarch
> May 28 14:27:02 Updated: prl-disp-service-tests-7.0.942.2-1.vz7.x86_64
> May 28 14:27:03 Updated: glibc-2.17-260.vl7.5.i686
> May 28 14:27:04 Updated: libgcc-4.8.5-36.vl7.2.i686
> May 28 14:27:04 Updated: libstdc++-4.8.5-36.vl7.2.i686
>
> Gr, J

--
Thanks,
Arjit Chaudhary
Re: [Users] Containers list as online in vzlist but not under prlctl list
Hi,

That path with the UUID under /vz/private/ doesn't seem to exist at all on my system:

[root@box conf]# ls -l | grep 2108
lrwxrwxrwx 1 root root 24 May 7 12:07 2108.conf -> /vz/private/2108/ve.conf
-rw-r--r-- 1 root root 1865 May 23 19:11 2108.conf.pre-vswap
[root@box conf]# prlctl list -a 2108
UUID STATUS IP_ADDR T NAME
{bb632556-4faf-4f36-9a64-f3f0d700c0da} stopped - CT 2108
[root@box conf]# ls -l /vz/private/bb632556-4faf-4f36-9a64-f3f0d700c0da/
ls: cannot access /vz/private/bb632556-4faf-4f36-9a64-f3f0d700c0da/: No such file or directory

prlctl list -a shows 1 VPS in total as online and the rest as offline, while vzlist has them all online. But also, there is no UUID-named folder in my /vz/private for any container.

On Mon, May 27, 2019 at 5:42 PM John wrote:
> This error probably is something in /etc/vz/conf
>
> Normally there is a symlink like: 74eb4c58-1325-484c-950c-a4084d6f51ff.conf -> /vz/private/74eb4c58-1325-484c-950c-a4084d6f51ff/ve.conf
>
> You might have 2108.conf as a file in /etc/vz/conf and need to symlink 74eb4c58-1325-484c-950c-a4084d6f51ff to 2108.conf
> --------
> From: users-boun...@openvz.org on behalf of Konstantin Khorenko
> Sent: Monday, May 27, 2019 6:24:20 AM
> To: OpenVZ users
> Cc: Igor Sukhih
> Subject: Re: [Users] Containers list as online in vzlist but not under prlctl list
>
> On 05/27/2019 01:15 AM, Arjit Chaudhary wrote:
> > Hello,
> > Been facing this odd issue where the container lists as "stopped" under prlctl list but lists as online under vzlist at the same time, for example,
> >
> > [root@box ~]# vzlist -a 2108
> > CTID NPROC STATUS IP_ADDR HOSTNAME
> > 2108 100 running 192.168.0.10 -
> >
> > [root@box ~]# prlctl list 2108
> > UUID STATUS IP_ADDR T NAME
> > {74eb4c58-1325-484c-950c-a4084d6f51ff} stopped - CT 2108
> > [root@box ~]#
> >
> > When I try to boot it via prlctl start,
> >
> > [root@box ~]# prlctl start 2108
> > Starting the CT...
> > Failed to start the CT: The virtual machine could not be found. The virtual machine is not registered in the virtual machine directory on this server. Contact your Virtuozzo administrator for assistance.
> > [root@box ~]#
> >
> > prlctl version 7.0.173.2
> >
> > Any help would be very appreciated.
>
> Hi,
>
> please file a bug at bugs.openvz.org and post additionally the following information:
>
> # prlctl list -i 2108
>
> # cat /etc/vz/conf/2108.conf
>
> --
> Best regards,
> Konstantin Khorenko,
> Virtuozzo Linux Kernel Team

--
Thanks,
Arjit Chaudhary
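The symlink layout John describes can be reproduced in a scratch directory to see what the config registration is expected to look like. This is a self-contained sketch using temp paths only (the UUID is the one from this thread, used purely as an example; it does not touch the real /etc/vz or /vz/private):

```shell
#!/bin/sh
# Demonstrate the /etc/vz/conf symlink layout that prlctl expects,
# using a scratch directory instead of the real /etc/vz and /vz/private.
# The UUID below is the one from this thread, used only as an example.
root=$(mktemp -d)
uuid=bb632556-4faf-4f36-9a64-f3f0d700c0da

mkdir -p "$root/vz/private/$uuid" "$root/etc/vz/conf"
# The container config lives inside the private area...
touch "$root/vz/private/$uuid/ve.conf"
# ...and /etc/vz/conf/<UUID>.conf is a symlink pointing at it.
ln -s "$root/vz/private/$uuid/ve.conf" "$root/etc/vz/conf/$uuid.conf"

readlink "$root/etc/vz/conf/$uuid.conf"
```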
[Users] Containers list as online in vzlist but not under prlctl list
Hello,

Been facing this odd issue where the container lists as "stopped" under prlctl list but lists as online under vzlist at the same time. For example:

[root@box ~]# vzlist -a 2108
CTID NPROC STATUS IP_ADDR HOSTNAME
2108 100 running 192.168.0.10 -

[root@box ~]# prlctl list 2108
UUID STATUS IP_ADDR T NAME
{74eb4c58-1325-484c-950c-a4084d6f51ff} stopped - CT 2108
[root@box ~]#

When I try to boot it via prlctl start:

[root@box ~]# prlctl start 2108
Starting the CT...
Failed to start the CT: The virtual machine could not be found. The virtual machine is not registered in the virtual machine directory on this server. Contact your Virtuozzo administrator for assistance.
[root@box ~]#

prlctl version 7.0.173.2

Any help would be very appreciated.

--
Thanks,
Arjit Chaudhary
[Users] prl_disp_service using a lot of RAM when migrating containers
Earlier today, when migrating a few containers, I had a host-node lock up due to the prl_disp_service process using a lot of RAM:

[image: image.png]

Can this be disabled on OpenVZ 7, or is it needed when migrating via prlctl migrate?

--
Thanks,
Arjit Chaudhary
Re: [Users] odd issues with vzmigrate
Hi,

I was able to resolve the prlctl migrate issue by removing the installed rsync (it was a newer version from EPEL) and manually installing the one available from OpenVZ's repository.

Re: live migrate — on my side, I have the same issue with live migration, even if both servers are 1:1 in configuration: same Intel CPU, same RAM, same disk capacity, and yet the issue persists.

On Fri, 17 May 2019, 02:03 jjs - mainphrame wrote:
> I've noticed that live migration stopped working for me on OVZ 7 as well. It used to work; heck, even in openvz 6 it worked.
>
> I posted here after noticing the issue, and was advised that live migration between intel and amd CPUs was problematic. So yesterday I bit the bullet, and swapped my OVZ amd host for a genuine intel box. Sadly, even with intel on both OVZ hosts, live migrate always fails. It still claims cpu mismatch, and so I pass -f cpu, but it still crashes and burns. Dead migration still works though.
>
> Although both nodes are now running Intel, the CPUs are not identical. Is this an issue?
>
> Here are the CPUs on each node:
>
> [root@hachi ~]# lscpu
> Architecture: x86_64
> CPU op-mode(s): 32-bit, 64-bit
> Byte Order: Little Endian
> CPU(s): 2
> On-line CPU(s) list: 0,1
> Thread(s) per core: 1
> Core(s) per socket: 2
> Socket(s): 1
> NUMA node(s): 1
> Vendor ID: GenuineIntel
> CPU family: 6
> Model: 23
> Model name: Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz
> Stepping: 10
> CPU MHz: 2992.785
> BogoMIPS: 5985.57
> Virtualization: VT-x
> L1d cache: 32K
> L1i cache: 32K
> L2 cache: 6144K
> NUMA node0 CPU(s): 0,1
> Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf eagerfpu pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm tpr_shadow vnmi flexpriority dtherm
> [root@hachi ~]#
>
> [root@annie ~]# lscpu
> Architecture: x86_64
> CPU op-mode(s): 32-bit, 64-bit
> Byte Order: Little Endian
> CPU(s): 8
> On-line CPU(s) list: 0-7
> Thread(s) per core: 2
> Core(s) per socket: 4
> Socket(s): 1
> NUMA node(s): 1
> Vendor ID: GenuineIntel
> CPU family: 6
> Model: 42
> Model name: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
> Stepping: 7
> CPU MHz: 1599.768
> CPU max MHz: 3800.
> CPU min MHz: 1600.
> BogoMIPS: 6784.18
> Virtualization: VT-x
> L1d cache: 32K
> L1i cache: 32K
> L2 cache: 256K
> L3 cache: 8192K
> NUMA node0 CPU(s): 0-7
> Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm epb ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts spec_ctrl intel_stibp flush_l1d
> [root@annie ~]#
>
> Jake
>
> On Thu, May 16, 2019 at 1:21 PM Arjit Chaudhary wrote:
>> Oddly enough the issue is present on another server now, I have created a bug report for it now,
>> https://bugs.openvz.org/projects/OVZ/issues/OVZ-7091?filter=allopenissues
>>
>> [root@de23 ~]# prlctl migrate 16764 192.168.0.2 --ssh "-o Port=22" --verbose 10
>> Logging in
>> server uuid={98c783aa-90eb-47c9-99ea-1838ae37124d}
>> sessionid={4cc677b9-ec79-4bb6-9871-2458cbade840}
>> The virtual machine found: 16764
>> Migrate the CT 16764 on 192.168.0.2 ()
>> security_level=0
>> PrlCleanup::register_hook: 4afcdd40
>> EVENT type=100030
>> Migration started.
>> EVENT type=100523
>> Checking preconditions
>> EVENT type=100031
>> Migration cancelled!
>>
>> Failed to migrate the CT: Failed to migrate the Container. An internal error occurred when performing the operation. Try to migrate the Container again. If the problem persists, contact the Virtuozzo support team f
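When live migration complains about a CPU mismatch, the difference between the two nodes' flag sets is the interesting part, since that is effectively what the compatibility check compares. A runnable sketch that diffs two flag lists (the lists below are abbreviated stand-ins for the full lscpu Flags output above, not the real sets):

```shell
#!/bin/sh
# Show CPU flags present on the destination but missing on the source,
# which is roughly what a live-migration CPU compatibility check compares.
# The two lists are abbreviated stand-ins for the lscpu output above.
src_flags="fpu vme sse sse2 ssse3 sse4_1 vmx"
dst_flags="fpu vme sse sse2 ssse3 sse4_1 sse4_2 aes avx vmx"

a=$(mktemp); b=$(mktemp)
# One flag per line, sorted, so comm(1) can compare the sets
printf '%s\n' $src_flags | sort -u > "$a"
printf '%s\n' $dst_flags | sort -u > "$b"

echo "flags only on destination:"
comm -13 "$a" "$b"

rm -f "$a" "$b"
```

On real nodes the two lists would come from `lscpu | grep ^Flags` on each host.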
Re: [Users] odd issues with vzmigrate
Oddly enough, the issue is present on another server now. I have created a bug report for it:
https://bugs.openvz.org/projects/OVZ/issues/OVZ-7091?filter=allopenissues

[root@de23 ~]# prlctl migrate 16764 192.168.0.2 --ssh "-o Port=22" --verbose 10
Logging in
server uuid={98c783aa-90eb-47c9-99ea-1838ae37124d}
sessionid={4cc677b9-ec79-4bb6-9871-2458cbade840}
The virtual machine found: 16764
Migrate the CT 16764 on 192.168.0.2 ()
security_level=0
PrlCleanup::register_hook: 4afcdd40
EVENT type=100030
Migration started.
EVENT type=100523
Checking preconditions
EVENT type=100031
Migration cancelled!

Failed to migrate the CT: Failed to migrate the Container. An internal error occurred when performing the operation. Try to migrate the Container again. If the problem persists, contact the Virtuozzo support team for assistance.
resultCount: 0
PrlCleanup::unregister_hook: 4afcdd40
Logging off

On Wed, May 15, 2019 at 10:23 PM Arjit Chaudhary wrote:
> re: prlctl migrate -- It was my mistake with the command.
>
> I was applying:
>
> prlctl migrate 192.168.0.10 5128
>
> Instead of:
>
> prlctl migrate 5128 192.168.0.10
>
> So now, prlctl migrate works fine; vzmigrate still has this issue with the same output as before.
>
> On Mon, May 13, 2019 at 11:23 AM Vasily Averin wrote:
>> Dear Arjit,
>> it looks like some bug to me, and I would like to advise you to submit a bug into the openvz bug tracker https://bugs.openvz.org/ on the openVZ project
>>
>> thank you,
>> Vasily Averin
>>
>> On 5/11/19 3:16 PM, Arjit Chaudhary wrote:
>> > Hello,
>> > I've been using vzmigrate without issues for a couple of years on VZ6, but on VZ7 I run into this odd issue:
>> >
>> > I am able to migrate from a newer kernel to an older kernel, BUT I am unable to migrate from the same kernel to the same kernel.
>> >
>> > ie,
>> > 3.10.0-957.10.1.vz7.85.17 --to--> 3.10.0-957.10.1.vz7.85.17 == Fail
>> > but,
>> > 3.10.0-957.10.1.vz7.85.17 --to--> 3.10.0-862.20.2.vz7.73.29 == Success
>> >
>> > This is the error I get when I try to migrate to 192.168.0.10, which runs 3.10.0-957.10.1.vz7.85.17:
>> >
>> >> [root@source ~]# vzmigrate 192.168.0.10 5128
>> >> ssh exited with code 255
>> >> ssh wait daemon exited with code 1
>> >> vzsock_open() return 1
>> >>
>> >> Can not create connection to 192.168.0.10
>> >
>> > I am able to SSH into 192.168.0.10 via the command ssh root@192.168.0.10 without any issue.
>> >
>> > I did try prlctl migrate but that too is returning an error:
>> >
>> >> [root@source ~]# prlctl migrate 192.168.0.10 5128
>> >> Failed to get VM config: The virtual machine could not be found. The virtual machine is not registered in the virtual machine directory on this server. Contact your Virtuozzo administrator for assistance.
>> >
>> > prlctl version 7.0.173
>> > vzmigrate version 7.0.119-1.vz7
>> >
>> > Any help would be appreciated on this.

--
Thanks,
Arjit Chaudhary
Re: [Users] odd issues with vzmigrate
re: prlctl migrate -- It was my mistake with the command.

I was applying:

> prlctl migrate 192.168.0.10 5128

Instead of:

> prlctl migrate 5128 192.168.0.10

So now, prlctl migrate works fine; vzmigrate still has this issue with the same output as before.

On Mon, May 13, 2019 at 11:23 AM Vasily Averin wrote:
> Dear Arjit,
> it looks like some bug to me, and I would like to advise you to submit a bug into the openvz bug tracker https://bugs.openvz.org/ on the openVZ project
>
> thank you,
> Vasily Averin
>
> On 5/11/19 3:16 PM, Arjit Chaudhary wrote:
> > Hello,
> > I've been using vzmigrate without issues for a couple of years on VZ6, but on VZ7 I run into this odd issue:
> >
> > I am able to migrate from a newer kernel to an older kernel, BUT I am unable to migrate from the same kernel to the same kernel.
> >
> > ie,
> > 3.10.0-957.10.1.vz7.85.17 --to--> 3.10.0-957.10.1.vz7.85.17 == Fail
> > but,
> > 3.10.0-957.10.1.vz7.85.17 --to--> 3.10.0-862.20.2.vz7.73.29 == Success
> >
> > This is the error I get when I try to migrate to 192.168.0.10, which runs 3.10.0-957.10.1.vz7.85.17:
> >
> >> [root@source ~]# vzmigrate 192.168.0.10 5128
> >> ssh exited with code 255
> >> ssh wait daemon exited with code 1
> >> vzsock_open() return 1
> >>
> >> Can not create connection to 192.168.0.10
> >
> > I am able to SSH into 192.168.0.10 via the command ssh root@192.168.0.10 without any issue.
> >
> > I did try prlctl migrate but that too is returning an error:
> >
> >> [root@source ~]# prlctl migrate 192.168.0.10 5128
> >> Failed to get VM config: The virtual machine could not be found. The virtual machine is not registered in the virtual machine directory on this server. Contact your Virtuozzo administrator for assistance.
> >
> > prlctl version 7.0.173
> > vzmigrate version 7.0.119-1.vz7
> >
> > Any help would be appreciated on this.

--
Thanks,
Arjit Chaudhary
[Users] odd issues with vzmigrate
Hello,

I've been using vzmigrate without issues for a couple of years on VZ6, but on VZ7 I run into this odd issue:

I am able to migrate from a newer kernel to an older kernel, BUT I am unable to migrate from the same kernel to the same kernel.

ie,
3.10.0-957.10.1.vz7.85.17 --to--> 3.10.0-957.10.1.vz7.85.17 == Fail
but,
3.10.0-957.10.1.vz7.85.17 --to--> 3.10.0-862.20.2.vz7.73.29 == Success

This is the error I get when I try to migrate to 192.168.0.10, which runs 3.10.0-957.10.1.vz7.85.17:

> [root@source ~]# vzmigrate 192.168.0.10 5128
> ssh exited with code 255
> ssh wait daemon exited with code 1
> vzsock_open() return 1
>
> Can not create connection to 192.168.0.10

I am able to SSH into 192.168.0.10 via the command ssh root@192.168.0.10 without any issue.

I did try prlctl migrate but that too is returning an error:

> [root@source ~]# prlctl migrate 192.168.0.10 5128
> Failed to get VM config: The virtual machine could not be found. The virtual machine is not registered in the virtual machine directory on this server. Contact your Virtuozzo administrator for assistance.

prlctl version 7.0.173
vzmigrate version 7.0.119-1.vz7

Any help would be appreciated on this.

--
Thanks,
Arjit Chaudhary
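The "ssh exited with code 255" above means the underlying ssh invocation itself failed, not the migration logic. A quick way to reproduce the kind of non-interactive connection vzmigrate relies on is sketched below as a dry run (RUN=echo prints instead of executing; the destination host is the example address from this thread):

```shell
#!/bin/sh
# Reproduce the kind of non-interactive ssh check vzmigrate depends on.
# RUN=echo keeps this a dry run; drop it to actually test the connection.
# The destination below is the example address from this thread.
RUN=echo
DEST=192.168.0.10

$RUN ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$DEST" /bin/true
# An exit code of 255 from ssh itself (keys, host key, port) matches the
# "ssh exited with code 255" that vzmigrate reports.
```

BatchMode=yes is what distinguishes this from an interactive `ssh root@192.168.0.10`, which can still succeed by prompting for a password while the batch connection fails.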