OpenVZ 6 is the last fully functional version, running on top of CentOS.
OpenVZ 7, 8, 9 ...
Maybe it is better to just use virtual machines with QEMU-KVM and libvirt?
This solution is very stable, feature-rich, and useful.
If you need very cheap virtual machines - try to use
Hello All,
ploop snapshots are dangerous - they can lead to data loss
if the partition where the ploop file is located has no free space.
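A cheap guard against this failure mode is to check free space on the partition holding the ploop image before snapshotting. A sketch (the path and the tiny threshold below are placeholders; a real image lives somewhere like /vz/private/<CTID>/root.hdd and the threshold should be several GiB):

```shell
#!/bin/sh
# Refuse to snapshot when the filesystem holding the ploop image
# is low on free space. Path and threshold are placeholders.
set -eu
PLOOP_DIR=${1:-/tmp}      # stand-in for the directory with root.hdd
MIN_FREE_KB=1024          # toy threshold: 1 MiB (use several GiB in real life)

FREE_KB=$(df -Pk "$PLOOP_DIR" | awk 'NR==2 {print $4}')
if [ "$FREE_KB" -lt "$MIN_FREE_KB" ]; then
    echo "refusing snapshot: only ${FREE_KB} KiB free under $PLOOP_DIR" >&2
    exit 1
fi
echo "ok to snapshot: ${FREE_KB} KiB free under $PLOOP_DIR"
```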
Detailed information:
I use the file-based backup technology for ploop backups.
https://wiki.openvz.org/Ploop/Backup#File-based_backup
Backup process:
# Take a snapshot
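The steps from the wiki page can be sketched as a dry-run script. CTID, the mount point, the destination, and the exact vzctl flags are assumptions to verify against the wiki; every privileged command is only printed, never executed:

```shell
#!/bin/sh
# Dry-run sketch of the file-based ploop backup sequence; "run" only
# prints each command instead of executing it.
set -eu
CTID=101                        # placeholder container ID
SNAP=$(cat /proc/sys/kernel/random/uuid 2>/dev/null || echo snap-1)
DEST=/backup/ct$CTID            # placeholder backup destination
DONE=""

run() { echo "would run: $*"; DONE="$DONE $*"; }

# 1. take a snapshot while the container keeps running
run vzctl snapshot "$CTID" --id "$SNAP" --skip-suspend --skip-config
# 2. mount the snapshot and copy files out
run vzctl snapshot-mount "$CTID" --id "$SNAP" --target "/mnt/ct$CTID"
run rsync -a --delete "/mnt/ct$CTID/" "$DEST/"
# 3. unmount and drop the snapshot
run vzctl snapshot-umount "$CTID" --id "$SNAP"
run vzctl snapshot-delete "$CTID" --id "$SNAP"
```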
Hello, All!
What hardware is supported by OpenVZ 6 kernel?
I found a FAQ answer:
https://wiki.openvz.org/Legacy_OpenVZ_FAQ#What_hardware_is_supported_by_OpenVZ_kernel.3F
See Virtuozzo HCL.
But the "See Virtuozzo HCL" link
http://www.parallels.com/en/products/virtuozzo/hcl/
is broken.
How
Hello, All!
# uname -a
Linux hardware-node 2.6.32-042stab126.1 #1 SMP Wed Nov 15 20:14:46 MSK
2017 x86_64 x86_64 x86_64 GNU/Linux
# grep NO_HZ /boot/config-$(uname -r)
CONFIG_NO_HZ=y
During backups of ploop containers over the network:
https://openvz.org/Ploop/Backup#File-based_backup
I see in
On 30.05.2017 11:46, Vasily Averin wrote:
Dear OpenVZ users,
could you please share your feedback on Vz7?
Not usable for production, because regular updates are absent.
How do you perceive Virtuozzo VMs vs others (Oracle or KVM VMs) ?
KVM and Docker are a free and stable alternative with
On 01.11.2016 22:52, Chris James wrote:
It doesn't seem to do anything if the container is running:
It works with running containers too.
pcompact does nothing if the threshold is not exceeded.
/etc/vz/pcompact.conf
# Start compacting if unused space is greater than specified THRESHOLD
in
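For reference, the knob lives in /etc/vz/pcompact.conf; a minimal fragment might look like this (the key names and values below are assumptions for illustration - check the file actually shipped on your node):

```
# /etc/vz/pcompact.conf (illustrative)
# Start compacting if unused space is greater than specified THRESHOLD (percent)
THRESHOLD=20
# Stop compacting once the unused space falls below DELTA (percent)
DELTA=10
```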
Hello, All!
File-based backup in OpenVZ 6:
https://openvz.org/Ploop/Backup#File-based_backup
File-based backup in OpenVZ 7:
not possible at all; it is broken now.
How to reproduce:
using prlctl
"prlctl snapshot-mount" and "prlctl snapshot-umount" are not implemented,
file-based
Hello, All!
How can one make https://openvz.org/Ploop/Backup#File-based_backup
using OpenVZ 7 and the prlctl/ploop tools? (vzctl is deprecated.)
--
Best regards,
Gena
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users
Hello, All!
The announcement of the OpenVZ 7.0 release has a "Known Issues" section:
"vzctl will be obsoleted in next version of OpenVZ, consider switching
to prlctl or virsh."
how is it possible to switch from vzctl to prlctl, if prlctl does not
provide all the functionality of vzctl?
For example:
Hello, All!
Can the script
https://src.openvz.org/projects/OVZL/repos/ovztransfer/browse/ovztransfer.sh
transfer only old ploop -> new ploop containers,
or can it also transfer old simfs -> new ploop containers?
--
Best regards,
Gena
Hello, All!
openvz_readme.pdf:
"Limited simfs support (feature provided as is). (OVZ-6613)"
https://bugs.openvz.org/browse/OVZ-6613
"Need to remove simfs stub module"
https://bugs.openvz.org/browse/OVZ-6752
"Need to remove simfs stub module"
Status: Open
Priority: Major
Resolution:
On 06.06.2016 13:51, Sergey Bronnikov wrote:
Why do you prefer simfs instead of ploop?
simfs allows using OpenVZ containers with native ZFS storage.
Did you see the comparison of simfs vs ploop?
https://openvz.org/CT_storage_backends
On 29.03.2016 21:03, Karl Johnson wrote:
Every weekend I do backups of all CTs, which takes a lot of IO. It didn't
affect load average much before 108, but as soon as I upgraded to 113, the load
got very high and the nodes became sluggish during backups. It might be
something else but I was looking for
On 25.12.2015 13:21, Gena Makhomed wrote:
Sorry, this bug is not related to OpenVZ.
Sorry, I was wrong; it looks like this bug is related to OpenVZ.
A second container with CentOS 7.2 + nginx 1.9.9 from the official repo
gives the same result - nginx is down after reboot with errors in the log.
nginx
Hello, All!
Two events:
1) "Starting LSB: Bring up/down networking..."
2) "nginx: [emerg] bind() to 172.23.23.161:80 failed (99: Cannot assign
requested address)"
Is this an OpenVZ bug?
How can I fix or work around it?
# rpm -q nginx
nginx-1.9.9-1.el7.ngx.x86_64
# cat /etc/redhat-release
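The error usually means nginx calls bind() before the venet0 address has been configured during boot. Two workarounds are commonly used for this class of problem (both are assumptions to verify on your node, not a fix confirmed in this thread): allow binding to a not-yet-configured address via sysctl, or order nginx after network-online.target with a systemd drop-in:

```
# /etc/sysctl.d/90-nonlocal-bind.conf
# allow bind() to an address that is not configured yet
net.ipv4.ip_nonlocal_bind = 1

# /etc/systemd/system/nginx.service.d/wait-network.conf
# alternatively, start nginx only after the network is fully up
[Unit]
Wants=network-online.target
After=network-online.target
```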
On 04.08.2015 17:54, Scott Dowdle wrote:
The name vzctl means Virtuozzo control - a very good and useful
name, just as vzkernel means Virtuozzo kernel - easy to
understand.
prlctl is an unpronounceable name of six strange consonant
letters, and now it has no sense or relation to the Virtuozzo
On 03.08.2015 17:09, Sergey Bronnikov wrote:
From a source code point of view, OpenVZ is an umbrella for OpenVZ legacy,
opensourced components from commercial Virtuozzo, CRIU,
and other mini projects (like LibCT etc).
From a people point of view, OpenVZ is a project which consolidates the community,
On 03.08.2015 12:56, Sergey Bronnikov wrote:
we have published part of the Virtuozzo documentation on a separate
site - http://docs.openvz.org/. And we will add more docs soon.
See:
On 03.08.2015 14:18, Sergey Bronnikov wrote:
we have published part of the Virtuozzo documentation on a separate
site - http://docs.openvz.org/. And we will add more docs soon.
BTW, the domain name virtuozzo.org is registered and currently works
as a redirect to the web page
On 25.07.2015 4:46, Kir Kolyshkin wrote:
This tool is to be used for inner ploop ext4. As a result, the data will
be less sparse, there will be more empty blocks for ploop to discard.
I encourage you to experiment with e4defrag2 and post your results here.
Usage is something like this
On 23.07.2015 5:44, Kir Kolyshkin wrote:
My experience with ploop:
DISKSPACE limited to 256 GiB; real data used inside the container
was near 40-50% of the 256 GiB limit, but the ploop image is a lot
bigger - it uses nearly 256 GiB of space on the hardware node. Overhead ~50-60%.
I found a workaround for this: run
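The overhead figure can be sanity-checked with simple arithmetic (the 110 GiB data figure is an assumption picked from inside the 40-50% range quoted above):

```shell
#!/bin/sh
# Back-of-the-envelope version of the overhead numbers above:
# image ~256 GiB on the hardware node, real data inside ~110 GiB
# (about 43% of the 256 GiB limit); the rest is ploop overhead.
set -eu
IMAGE_GIB=256
DATA_GIB=110
OVERHEAD_GIB=$((IMAGE_GIB - DATA_GIB))
OVERHEAD_PCT=$((100 * OVERHEAD_GIB / IMAGE_GIB))
echo "overhead: ${OVERHEAD_GIB} GiB (${OVERHEAD_PCT}% of the image)"
```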
On 25.07.2015 1:06, Kir Kolyshkin wrote:
I think it is not a good idea to run ploop compaction more frequently
than once per day at night - so we need to take into account
not the minimal value of overhead but the maximal one, after 24 hours
of the container working in normal mode - for planning disk space
On 23.07.2015 3:45, Scott Dowdle wrote:
vzctl has a compact option that will basically take the free space
and give it back to the host. I've used compact a few times but I
don't use it regularly... so I'm not sure how efficient it is nor
how good it is at reclaiming 100% of the unused
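The compact action mentioned above can be scheduled from cron; a dry-run sketch (CTID is a placeholder and the vzctl call is only printed, since invocation details should be checked against your vzctl version):

```shell
#!/bin/sh
# Sketch: nightly ploop compaction of one container, per the thread's
# advice to compact once per day at night. Nothing is executed here.
set -eu
CTID=101
CMD="vzctl compact $CTID"
echo "would run nightly: $CMD"
# corresponding crontab entry (illustrative):
# 30 3 * * * root vzctl compact 101 >/dev/null 2>&1
```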
On 23.07.2015 5:44, Kir Kolyshkin wrote:
1) currently even suspend/resume does not work reliably:
https://bugzilla.openvz.org/show_bug.cgi?id=2470
- I can't suspend and resume containers without bugs,
and as a result I also can't use it for live migration.
Valid point, we need to figure it out.
On 22.07.2015 21:02, Scott Dowdle wrote:
Compare two situations:
1) Live migration is not used at all
2) Live migration is used and containers are migrated between HNs
In which situation is the possibility of a kernel panic higher?
If you say the possibilities are equal, this means
that OpenVZ live
On 22.07.2015 5:56, Scott Dowdle wrote:
I've read the recipes.
Some say you have to dedicate 1 GB of RAM for every TB of storage.
Dedicating 1 GB of RAM for every TB of storage
is needed only if deduplication is turned on in ZFS.
But enabling deduplication is not recommended - it uses a
lot of memory
On 22.07.2015 21:58, Scott Dowdle wrote:
ext4 over ploop over ext4 wastes disk space as overhead.
That is the case for all disk-file-as-disk-image containers and not
unique to ploop. You said if you can't use OpenVZ and ZFS together
(in the future maybe) then you'd switch to KVM... at
On 22.07.2015 3:17, Kir Kolyshkin wrote:
simfs is needed for using OpenVZ with ZFS
Other "why not simfs" considerations are listed at
http://openvz.org/Ploop/Why#Before_ploop
there are three levels:
1. before ploop: simfs over ext4
2. with ploop: ext4 over ploop over ext4
3. after ploop: simfs
On 22.07.2015 0:11, Kir Kolyshkin wrote:
The biggest problem with simfs appears to be security. We have recently
found a few bugs (not in simfs per se, but in the kernel in general,
i.e. these
are not our bugs for the most part) that can be exploited to escape
simfs and let the container access
On 19.06.2015 15:49, Scott Dowdle wrote:
While yum does have a download only option
(and there is yumdownloader in the yum-utils package)...
yum really shouldn't be overloaded to be a file sync tool.
yum is a package manager, not a file sync tool.
yum understands repository metadata about rpm
On 19.05.2015 4:50, Kir Kolyshkin wrote:
In the CentOS 7 OpenVZ template the default target is also not
multi-user, and it should be manually switched via the command line:
# systemctl set-default multi-user.target
But why is the default target in OpenVZ templates not multi-user.target?
Please file a bug.
On 19.05.2015 2:46, Kir Kolyshkin wrote:
Also you probably want to set multi-user as a default systemd target
(if it is not set that way already):
# Set default target as multi-user target
rm -f lib/systemd/system/default.target
ln -s multi-user.target lib/systemd/system/default.target
mkdir
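The quoted snippet can be rehearsed safely against a scratch directory instead of a real template root (the paths below are stand-ins; on a real template you would run the rm/ln pair relative to the unpacked template's root):

```shell
#!/bin/sh
# Rehearse the default-target switch from the post inside a temp dir.
set -eu
ROOT=$(mktemp -d)
mkdir -p "$ROOT/lib/systemd/system"
: > "$ROOT/lib/systemd/system/multi-user.target"
# pretend the template currently defaults to graphical.target
ln -s graphical.target "$ROOT/lib/systemd/system/default.target"

# Set default target as multi-user target (same commands as the post,
# rooted at $ROOT)
rm -f "$ROOT/lib/systemd/system/default.target"
ln -s multi-user.target "$ROOT/lib/systemd/system/default.target"

RESULT=$(readlink "$ROOT/lib/systemd/system/default.target")
echo "default.target -> $RESULT"
rm -rf "$ROOT"
```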
On 13.05.2015 10:21, Pavel Odintsov wrote:
Docker is an awesome toolkit.
From a security and composability perspective,
the Docker process model - where everything runs through a central
daemon - is fundamentally flawed. To “fix” Docker would essentially
mean a rewrite of the project, while
On 13.05.2015 2:09, Pavel Odintsov wrote:
Completely disagree with
After hitting bug
https://bugzilla.openvz.org/show_bug.cgi?id=2470 I completely disabled
suspend-on-stop for all hardware nodes - VE_STOP_MODE=stop in
/etc/vz/vz.conf - and don't use it at all.
Sorry, but I really set
Hello, All!
an empty directory like /.cpt_hardlink_dir_a920e4ddc233afddc9fb53d26c392319
inside each container - is this a bug or a feature?
If it is a bug, will it be fixed in new releases?
If it is a feature, how can I use it and how can I disable it?
--
Best regards,
Gena
On 13.05.2015 0:29, Kir Kolyshkin wrote:
an empty directory like /.cpt_hardlink_dir_a920e4ddc233afddc9fb53d26c392319
inside each container - is this a bug or a feature?
If it is a bug, will it be fixed in new releases?
If it is a feature, how can I use it and how can I disable it?
The answer
On 01.04.2015 2:51, Scott Dowdle wrote:
network: arping: Device venet0 not available.
It has already been fixed in the dev version of vzctl
but it hasn't been pushed to a release yet.
For more info, see: https://bugzilla.openvz.org/show_bug.cgi?id=3169
Scott, thank you. I use your patch
On 01.04.2015 6:20, Kir Kolyshkin wrote:
network: arping: Device venet0 not available.
A better workaround is to apply this patch:
http://git.openvz.org/?p=vzctl;a=commitdiff;h=24a0a40277542fba5b81
Kir, thank you.
But I guess that CentOS 7 is widely used inside OpenVZ containers,
and many
Hello, All!
Is the template centos-7-x86_64-minimal.tar.gz production ready or not?
Are there known bugs or incompatibilities
compared to the centos-6-x86_64-minimal.tar.gz template?
For example, incompatibilities between new software from this template
and the old Linux kernel used on
On 16.09.2014 19:35, Scott Dowdle wrote:
Is the template centos-7-x86_64-minimal.tar.gz production ready or not?
Are there known bugs or incompatibilities
compared to the centos-6-x86_64-minimal.tar.gz template?
For example, incompatibilities between new software from this template
and old