Hello all,
Is it possible to use sa inside CTs?
When I do
# sa -im
inside a CT I get
sa: ERROR -- print_stats_nicely called with num_calls == 0
But all seems to be OK when run on the node.
--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
(=BSD Process Accounting) can't be run in an OpenVZ container.
Bye,
Thorsten
___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users
--
Best Regards,
Nick Knutov
How will it work if --cpus is specified and is less than the number of physical cores?
How will it work if --cpus is specified and the CPU has hyper-threading, and
1) --cpus is less than the number of CPU cores,
2) or --cpus is less and odd(!) (example: --cpus 3, physical CPU cores: 4
+ HT)?
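My understanding (a sketch, not an authoritative answer): --cpus only caps how many logical CPUs the CT sees; it does not pin the CT to particular physical cores, so an odd value on an HT machine just means the CT reports that many processors while the host scheduler spreads the load. The CTID below is illustrative:

```shell
# Illustrative: cap CT 101 at 3 logical CPUs on a 4-core + HT host.
# The CT sees 3 processors; which physical cores/threads actually run
# it is left to the host scheduler (no pinning implied).
vzctl set 101 --cpus 3 --save
vzctl exec 101 grep -c ^processor /proc/cpuinfo
```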
--
Best Regards,
Nick Knutov
Hello all,
a couple of years ago vzctl had a --noatime option. But now there is no such
option:
# vzctl set ${ve} --noatime yes --save
non-option ARGV-elements: --save
# man vzctl | grep noatime
#
What happened to it? I couldn't find anything about it on Google.
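The flag seems to have been dropped; one possible workaround (a hedged sketch, assuming vzctl still runs per-CT mount action scripts and exports VE_CONFFILE to them) is to remount with noatime from /etc/vz/conf/${VEID}.mount:

```shell
#!/bin/bash
# /etc/vz/conf/${VEID}.mount - executed by vzctl right after the CT
# filesystem is mounted. Sourcing the configs gives us VE_ROOT.
. /etc/vz/vz.conf
. "${VE_CONFFILE}"
mount -o remount,noatime "${VE_ROOT}" && exit 0
exit 1
```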
--
Best Regards,
Nick Knutov
incidents with ___FULL___ data loss from customers of
flashcache. Beware of it in production.
If you want speed, you can try ZFS with L2ARC/zvol cache, since it's a
native solution.
--
Best Regards,
Nick Knutov
performant than it is on the
OpenSolaris forks... so you can build your own if you can spare the people to
learn the best practices.
I don't have a use for ZFS myself, so I'm not really advocating it.
TYL,
--
Best Regards,
Nick Knutov
zvols? I
have done some testing with root and private directly on a ZFS file
system, and so far everything seems to work just fine.
What am I to expect down the road?
--
Best Regards,
Nick Knutov
numbers should change.
Do you have a really working zero-downtime vzmigrate on ZFS?
--
Best Regards,
Nick Knutov
I have an old server with regular disks and a new server with two smaller
SSDs. I have /vz on one disk and /vz2 on the other.
I want to live-migrate CTs from the old server to a specified partition on
the new server, but I can't find out how to do it. Does anybody know?
--
Best Regards,
Nick Knutov
, but this is also not a good way.
12.09.2014 5:33, Devon B. writes:
On 9/11/2014 7:00 PM, Nick Knutov wrote:
I have an old server with regular disks and a new server with two smaller
SSDs. I have /vz on one disk and /vz2 on the other.
I want to live migrate CTs from the old server
prior to migrating.
mkdir /vz2/private/VEID
ln -s /vz2/private/VEID /vz/private/VEID
Then try the migration, does it work?
On 9/11/2014 8:51 PM, Nick Knutov wrote:
I'm not good enough with such OpenVZ internals and hoped there was a
ready solution. I found https://openvz.org
, Nick Knutov wrote:
I did exactly that.
Migration to a symlink works, and the CT runs OK afterwards. But the
private/root paths are rewritten to /vz after migration, and for simfs with
billions of small files, running a CT from a symlink can be slower.
Migration from a symlink also works. With the same
(values printed in
the error message are in sectors, which are 512 bytes each).
Solution: please be reasonable when requesting diskinodes for ploop.
--
Best Regards,
Nick Knutov
to 2621440
Either that, or just remove DISKINODES from the CT config.
On 10/24/2014 8:05 PM, Nick Knutov wrote:
Thanks, now I understand why this occurred, but what is the easiest way
to convert a lot of different CTs to ploop? As I remember, there is no
way to set unlimited diskinodes or disable
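If the concern is just DISKINODES during conversion, a batch approach might look like this (a sketch: CONF_DIR, the sample config, and the commented-out vzctl call are illustrative - on a real node the config dir would be /etc/vz/conf and `vzctl convert` would actually run):

```shell
#!/bin/sh
# Work on a scratch copy so the loop is safe to dry-run.
CONF_DIR=$(mktemp -d)
printf 'DISKSPACE="10485760:11534336"\nDISKINODES="200000:220000"\n' \
    > "$CONF_DIR/101.conf"

for cfg in "$CONF_DIR"/*.conf; do
    ctid=$(basename "$cfg" .conf)
    # Drop the DISKINODES line so ploop picks its own default.
    sed -i '/^DISKINODES=/d' "$cfg"
    # On the real node: vzctl convert "$ctid" --layout ploop
done

grep -q DISKINODES "$CONF_DIR/101.conf" || echo "DISKINODES removed from 101"
```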
over iSCSI...
...and there are still no speed tests.
12.11.2014 15:20, Pavel Odintsov writes:
Any questions/suggestions/performance test and other feedback are
welcome here or on GitHub!
--
Best Regards,
Nick Knutov
maybe problems, good
addition. I just added a remark about quotas to the comparison table.
On Wed, Nov 12, 2014 at 9:56 PM, Nick Knutov m...@knutov.com wrote:
Well, a good beginning, but..
as we discussed earlier:
in most hosting use cases users need quotas. And quotas work
only with ext4.
Oh. I missed this.
13.11.2014 2:28, Devon B. writes:
I don't think you can just run ploop over ZFS. Ploop requires ext4 as
the host filesystem according to bug 2277:
https://bugzilla.openvz.org/show_bug.cgi?id=2277
--
Best Regards,
Nick Knutov
`. Is it possible?
(I know I can edit the source, I just want to check whether it is already
implemented, since I can't find it)
--
Best Regards,
Nick Knutov
Bronnikov writes:
we want to find people who still use simfs for OpenVZ containers.
Do we have such users?
--
Best Regards,
Nick Knutov
2015-10-28 12:22 GMT+03:00 Nick Knutov <m...@knutov.com
<mailto:m...@knutov.com>>:
Hello all,
I have a CT with sshfs mounted. When I tried to migrate this CT, I got:
Starting live migration of CT ... to ...
OpenVZ is running...
Checking for CPT version c
in clusters > 7 - 9 nodes
and wishes to share his or her experience, that's more than welcome.
Thanks,
Corrado
On 16/11/2015, at 4:44 AM, Nick Knutov wrote:
Unfortunately, pstorage has two major disadvantages:
1) it's not free
2) it's not usable for m
acceleration, i.e. it is distributed and it offers file system corruption
prevention (background scrubbing).
--
Best Regards,
Nick Knutov
(and still tune2fs -O ^has_journal)
be fine?
Was that fixed bug already compiled and sent to the yum repository (the
ploop package, I suppose)?
07.10.2015 17:03, Dmitry Monakhov writes:
> Sergey Bronnikov <serg...@openvz.org> writes:
>
>> Dima, could you help?
>>
>> On 02:08 Wed 30 Sep , Nick Knutov wrote:
>>
07.10.2015 21:05, Dmitry Monakhov writes:
Nick Knutov <m...@knutov.com> writes:
yes, I'm using SSDs.
The partition was
tune2fs -O ^has_journal /dev/sdX
so I thought the journal was removed completely and the data= option does
not matter at all.
WOW.. This is hilarious. Indeed even w/o journal
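For anyone following along, the journal removal plus a check that it actually happened looks like this (a sketch; /dev/sdX is a placeholder and the filesystem must be unmounted first):

```shell
# Remove the ext4 journal (run on an unmounted filesystem):
tune2fs -O ^has_journal /dev/sdX
e2fsck -f /dev/sdX

# Verify: 'has_journal' should be absent from the feature list.
dumpe2fs -h /dev/sdX | grep -i features
```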
--
Best Regards,
Nick Knutov
No. Even flashcache 2.x is not possible to compile with recent OpenVZ
RHEL6 kernels.
13.11.2015 15:57, CoolCold writes:
Bumping up - anyone still on flashcache & OpenVZ kernels? Tried to
compile flashcache 3.1.3 dkms against 2.6.32-042stab112.15, getting
errors:
--
Best Regards,
Nick Knutov
Is it possible to do a live migration between physical disks inside one
physical node?
I suppose the answer is still no, so the question is: what can be done
instead?
--
Best Regards,
Nick Knutov
Hello all,
what are the best/recommended mount options for ext4 on SSD disks for a
large number of ploop-only CTs?
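Not an official recommendation, just a commonly used starting point (a sketch; the device and mount point are illustrative):

```shell
# /etc/fstab - noatime avoids an access-time write on every read:
/dev/sdb1  /vz  ext4  defaults,noatime  0  2

# TRIM: a periodic fstrim (e.g. from cron.weekly) is often preferred
# over the 'discard' mount option, which can add latency on some SSDs:
fstrim -v /vz
```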
--
Best Regards,
Nick Knutov
vzctl restore CTID" after you
changed the configuration file.
>
> On 09/08/2015 07:14 AM, Nick Knutov wrote:
>
> > Is it possible to do live migration between physical disks inside one
> > physical node?
>
> > I suppose the answer
to log in via another ssh session and kill -9 it.
Kernel: 042stab108.8
Is it a bug, or am I doing something wrong?
--
Best Regards,
Nick Knutov
I know ipset is not virtualized, but I have a number of trusted CTs and I
want to use ipset inside them (and it's OK in my case to share all data
between the CTs and the node).
Is it possible to enable ipset for selected CTs?
--
Best Regards,
Nick Knutov
Hello,
is it possible now to limit CPU per user inside a CT? I assume it should
be possible with cgroups, but I don't know exactly which keywords to
google.
Kernel: latest OpenVZ 6.
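For the keywords: "cgroups cpu.cfs_quota_us". A minimal sketch of the CFS bandwidth approach; whether the cpu cgroup is usable inside an OpenVZ 6 CT is an assumption here, and the mount point, group name and PID are illustrative:

```shell
# Mount the cpu controller if it isn't already (path illustrative):
mkdir -p /cgroup/cpu
mount -t cgroup -o cpu none /cgroup/cpu

# Cap the group at half a CPU: quota/period = 50000/100000 = 0.5.
mkdir -p /cgroup/cpu/limited-user
echo 100000 > /cgroup/cpu/limited-user/cpu.cfs_period_us
echo 50000  > /cgroup/cpu/limited-user/cpu.cfs_quota_us

# Move one of the user's processes into the group (PID illustrative);
# children started afterwards inherit the group.
echo 12345 > /cgroup/cpu/limited-user/tasks
```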
--
Best Regards,
Nick Knutov
Hello all,
will PCI-e NVMe drives like the Intel P3600 and P3608 work with OpenVZ 6
if they are not the boot drive?
Or should I forget about NVMe until Virtuozzo 7?
--
Best Regards,
Nick Knutov
kill -9 I have an empty folder at /vz5/private/2016 (not
2016.tmp!)
dmesg | tail
ploop19205: unknown partition table
What can be wrong?
--
Best Regards,
Nick Knutov
ok, OVZ-6680 created.
04.02.2016 13:16, Konstantin Khorenko writes:
Hi Nick,
I haven't found a Jira issue from you - have you filed it?
On 01/29/2016 05:04 AM, Nick Knutov wrote:
Yes, the question is about ploop of course.
How can I get the metadata of a ploop image? `man what`?
If you are ok
--
Best Regards,
Nick Knutov
Best regards,
Konstantin Khorenko,
Virtuozzo Linux Kernel Team
On 01/28/2016 02:42 PM, Nick Knutov wrote:
Hello,
One of the big reasons to prefer simfs over ploop is the disk space
overhead in ploop after using snapshots (for backups, for example).
It can be really huge - we have one CT which takes 120Gb inste
I think I saw it in the wiki but was unable to find it now.
How do I create a ploop CT with vzctl create using a smaller ploop block
size than the default 1MB? Can I change it in some config file?
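I don't know of a vzctl create knob for this, but the ploop tool itself appears to take one; a hedged sketch (per my reading of `ploop init`, -b is the block size in 512-byte sectors, so 1024 would mean 512KB; the path and size are illustrative):

```shell
# Create a 10G ploop image with 512KB blocks instead of the 1MB default:
ploop init -s 10G -b 1024 -t ext4 /vz/private/101/root.hdd/root.hds
```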
--
Best Regards,
Nick Knutov
As far as I understand, the Virtuozzo 7 kernel DOES NOT contain the latest
NVMe driver, and the RHEL 7 kernel has some speed problems with NVMe.
Are there any official recommendations or suggestions from the OpenVZ team?
--
Best Regards,
Nick Knutov
--
Best Regards,
Nick Knutov
Is OpenVZ affected by Dirty COW?
What is the best way to fix it now?
--
Best Regards,
Nick Knutov
https://www.spinics.net/lists/stable/msg147964.html
On 21.10.2016 19:39, Vasily Averin wrote:
yes,
2.6.22+ are affected.
Here you can find a SystemTap script for mitigation:
https://bugzilla.redhat.com/show_bug.cgi?id=1384344#c13
On 21.10.2016 19:22, Nick Knutov wrote:
Is OpenVZ affected by Dirty
Hello all,
`top` shows privvmpages as used memory with all the latest OpenVZ 6
kernels, instead of oomguarpages.
Is it possible to fix this?
I suppose it started after the Dirty COW bug was fixed.
PS: vswap is used, of course.
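To see what the kernel itself accounts (rather than what top derives from it), the beancounters can be read directly; a sketch:

```shell
# Inside the CT: compare privvmpages, physpages and oomguarpages
# (held/maxheld/barrier/limit columns, counted in 4KB pages):
grep -E 'privvmpages|physpages|oomguarpages' /proc/user_beancounters
```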
--
Best Regards,
Nick Knutov