Re: [Users] How OVZ community treats Vz7

2017-06-11 Thread Nick Knutov
In my opinion, support for VZ and feedback from the VZ team are extremely poor, so 
we are planning to migrate to Docker next year, and to LXC as soon as it is at least 
as usable as VZ6.


For KVM there are plenty of alternatives; I see no reason to use VZ7 for that.



On 30.05.2017 13:46, Vasily Averin wrote:

Dear OpenVZ users,

could you please share your feedback on Vz7?

How do you perceive Virtuozzo VMs vs others (Oracle or KVM VMs) ?

How do you perceive Virtuozzo Containers vs others (Oracle containers, Docker 
containers, etc) ?
Thank you,
Vasily Averin
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] openvz6, top and free memory

2017-01-09 Thread Nick Knutov

Hello all,

`top` shows privvmpages as used memory with all the latest OpenVZ 6 kernels, 
instead of oomguarpages.

Is it possible to fix this?

I suppose this started after the COW bug was fixed.

ps: vswap is used, of course.
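
For reference, a minimal way to compare the counters involved (a sketch; it assumes a standard OpenVZ 6 CT where /proc/user_beancounters is readable, and only prints the rows relevant to this question):

# show held/limit values for the counters that can be reported as "used memory"
egrep 'uid|privvmpages|oomguarpages|physpages' /proc/user_beancounters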

--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Dirty COW

2016-10-27 Thread Nick Knutov
And it looks like something is broken with memory, even with 120.5: in our 
monitoring graphs I see a lot of containers whose memory usage equals the memory 
limit, and `top` now reports 0 bytes of free memory inside the CT.



On 25.10.2016 19:01, Dmitry Mishin wrote:

For those who missed the announcement:
https://openvz.org/Download/kernel/rhel6/042stab120.3 has been available since
22 Oct.

Thank you,
Dmitry.

On 22/10/16 16:07, "users-boun...@openvz.org on behalf of Scott Dowdle"
<users-boun...@openvz.org on behalf of dow...@montanalinux.org> wrote:


Greetings,

- Original Message -

According to the Red Hat bugzilla page
(https://bugzilla.redhat.com/show_bug.cgi?id=1384344#c13), they
claim that EL5 and EL6 are not vulnerable

No, they correctly claim the opposite.

Looking at that URL now (and remember what used to be there), there are a
number of comments that have been deleted... and in comment #33 they do
say:

"This issue affects the Linux kernel packages as shipped with Red Hat
Enterprise Linux 5 6,7 and MRG-2."

In some of the comments that were deleted they were claiming 5 and 6
weren't vulnerable but as you said, that was only a misunderstanding due
to some earlier PoC not working.

Thanks for the clarification.

TYL,
--
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Dirty COW

2016-10-21 Thread Nick Knutov


Are there plans to release new OpenVZ 6 kernels in the repository soon?


On 21.10.2016 22:00, Vasily Averin wrote:

For paid Virtuozzo customers (if any are reading this):
  you can contact support for a pre-release kernel.

Also, anyone affected can build a fixed kernel themselves
using this patch:
http://www.spinics.net/lists/stable/msg147964.html

On 21.10.2016 19:39, Vasily Averin wrote:

Yes, kernels 2.6.22+ are affected.

Here you can find a SystemTap script for mitigation:
https://bugzilla.redhat.com/show_bug.cgi?id=1384344#c13

On 21.10.2016 19:22, Nick Knutov wrote:

Is OpenVZ affected by Dirty COW?

What is the best way to fix it right now?



___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] Dirty COW

2016-10-21 Thread Nick Knutov

Is OpenVZ affected by Dirty COW?

What is the best way to fix it right now?


--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] NVMe and Virtuozzo 7

2016-08-24 Thread Nick Knutov


As far as I understand, the Virtuozzo 7 kernel DOES NOT contain the latest NVMe 
driver, and the RHEL 7 kernel has some performance problems with NVMe.

Are there any official recommendations or suggestions from the OpenVZ team?

--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] PCI-e NVMe and OpenVZ 6

2016-06-20 Thread Nick Knutov

Hello all,

Will PCI-e NVMe drives like the Intel P3600 and P3608 work with OpenVZ 6 if they 
are not the boot drive?

Or should I forget about NVMe until Virtuozzo 7?

--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] limit cpu per user

2016-06-03 Thread Nick Knutov

Hello,

Is it possible now to limit CPU per user inside a CT? I assume it should 
be possible with cgroups, but I don't know exactly which keywords to google.

Kernel: latest OpenVZ 6.
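
For what it's worth, a rough sketch of the cgroup idea (assumptions: the cpu controller is mounted at /cgroup/cpu as on a stock RHEL6-based setup, the "limit" is only a relative weight via cpu.shares rather than a hard cap, "someuser" is a placeholder, and whether the controller is actually usable inside a CT on the OpenVZ 6 kernel is exactly the open question here):

# create a group for one user and lower its CPU weight (default weight is 1024)
mkdir /cgroup/cpu/limited_someuser
echo 256 > /cgroup/cpu/limited_someuser/cpu.shares
# move the user's existing processes into the group
for pid in $(pgrep -u someuser); do echo $pid > /cgroup/cpu/limited_someuser/tasks; done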

--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] ploop block size

2016-04-15 Thread Nick Knutov

I think I saw it in the wiki but cannot find it now.

How can I create a ploop CT with `vzctl create` using a smaller ploop block size 
than the default 1 MB? Can I change it in some config file?
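
In case it is useful, a low-level sketch that bypasses vzctl and creates the image with the ploop tool directly (assumptions: `ploop init -b` takes the block size in 512-byte sectors, so 512 sectors = 256 KB; the CT id 999 and the path are placeholders; this is not a supported vzctl create path):

# create a 10 GB ploop image with a 256 KB cluster block instead of the default 1 MB
mkdir -p /vz/private/999/root.hdd
ploop init -s 10G -b 512 -t ext4 /vz/private/999/root.hdd/root.hdd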


--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] simfs challenge

2016-02-07 Thread Nick Knutov

ok, OVZ-6680 created.


On 04.02.2016 13:16, Konstantin Khorenko wrote:

Hi Nick,

I haven't found a Jira issue from you; have you filed it?

On 01/29/2016 05:04 AM, Nick Knutov wrote:

Yes, the question is about ploop of course.

How to get metadata of ploop image? `man what`?


If you are OK with Container downtime, it's easy:

# e2image -r /dev/ploopXp1 - | bzip2 >/vz/image.e2i.bz2

There is a way to do this while the CT is online; if you file a Jira 
issue, I'll post the details there.


--
Konstantin


On 28.01.2016 19:04, Konstantin Khorenko wrote:

Hi Nick,

i believe it's not the question about vzpbackup, but about
ploop compact which in your case works not that efficient.

Yes, we do work on improving ploop compact and would appreciate in case
you can provide us metadata of that "bad" ploop image.

Please, file an issue at bugs.openvz.org, i'll post instructions there.
(No real user data is needed, only fs metadata, which blocks are
filled which are not, etc)

Thank you.

--
Best regards,

Konstantin Khorenko,
Virtuozzo Linux Kernel Team

On 01/28/2016 02:42 PM, Nick Knutov wrote:

Hello,

One of big reasons to prefer simfs over ploop is disk space 
overhead in

ploop after using snapshots (for backups for example).
It can be really huge - we have one CT which takes 120Gb instead of 
60Gb

(df -h inside) after daily backups with vzpbackup (*) tool. It's
after(!) ploop merge & compact.

Do you plan to make some improvements for this case?

(*) https://github.com/andreasfaerber/vzpbackup

On 28.01.2016 14:27, Konstantin Khorenko wrote:

Hi All,

so, this is the current situation with simfs:

First of all we cannot implement as it was previously (in OpenVZ 6,
2.6.32-x kernels).
This is because there is no vzfs in Virtuozzo 7, but all quota 
related

code used by simfs
was shared between vzfs and simfs, and maintaining that code for 
simfs

only is definitely
not the thing we'd be happy to do.
Yes, we definitely want to reuse some mainstream (or write a
mainstream-able) code for it.

So, how this could be implemented?

"Old" simfs fs without quota - is just a bindmount => this is a start
point.
1st level quota for simfs-based Containers (the quota for the
Container as a whole) can be
implemented using project quota which is going to be accepted to
mainstream sooner or later.

As for the 2nd level quota (per-user quota inside a CT), we had not
found any good solution
during our internal discussions, so ideas from community are very
welcome.

So what do we have at the moment: you can create a simfs-based
Container in Virtuozzo 7
(see instructions below), but cannot manage quota for it.

What should be done further (that's what you can help us with):

1. Take project quota kernel patches (which Stas Kinsbursky already
ported to vz7 kernel
some time ago), apply them to current vz7 kernel - you'll get the
kernel able to manage
project quota.

2. Need to add project quota support to appropriate userspace tools:
quota-tools and e2fsprogs
see details at https://bugs.openvz.org/browse/OVZ-6619

Hope that helps to understand our plans on simfs in Virtuozzo 7
and looking forward for a hero who could drive this forward! :)



More formal feature description is below:

=== 




1. Feature
simfs filesystem for Virtuozzo 7 Containers


2. Description
https://bugs.openvz.org/browse/OVZ-6613
https://jira.sw.ru/browse/PSBM-40730

Differences between recommended Containers disk backend (ploop) and
simfs:
https://openvz.org/CT_storage_backends

Unlike previous versions of OpenVZ, simfs layout in Virtuozzo 7 is
based on bindmounts.
This means once you start a simfs-based Container, effectively
"private" area of a Container is bindmounted to the "root" Container
area.
That's it.

How to create a simfs-based Container:
* set VEFSTYPE=simfs in the /etc/vz/vz.conf
# vzctl create $VEID

3. Products
Virtuozzo 7, libvzctl-7.0.170


4. Testing
just a validation:
- create a Container
- start/stop the Container
- destroy the Container

5. Known issues
* quota for simfs-based Containers is not implemented
- 1st level quota (for the Container as a whole) is planned to be
implemented via project quota
  https://bugs.openvz.org/browse/OVZ-6619

- 2nd level quota (per-user quota inside a Container) is not 
planned


* online migration of a simfs-based Container is not implemented


6. Feature owner
Kernel part:Stanislav Kinsbursky <skinsbur...@virtuozzo.com>
Userspace part: Igor Sukhih <i...@virtuozzo.com>


--
Best regards,

Konstantin Khorenko,
Virtuozzo Linux Kernel Team
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] simfs challenge

2016-01-28 Thread Nick Knutov

Hello,

One of the big reasons to prefer simfs over ploop is the disk space overhead in 
ploop after using snapshots (for backups, for example).
It can be really huge: we have one CT that takes 120 GB instead of 60 GB 
(df -h inside) after daily backups with the vzpbackup (*) tool. That is 
after(!) ploop merge & compact.

Do you plan to make any improvements for this case?

(*) https://github.com/andreasfaerber/vzpbackup

On 28.01.2016 14:27, Konstantin Khorenko wrote:

Hi All,

so, this is the current situation with simfs:

First of all we cannot implement as it was previously (in OpenVZ 6, 
2.6.32-x kernels).
This is because there is no vzfs in Virtuozzo 7, but all quota related 
code used by simfs
was shared between vzfs and simfs, and maintaining that code for simfs 
only is definitely

not the thing we'd be happy to do.
Yes, we definitely want to reuse some mainstream (or write a 
mainstream-able) code for it.


So, how could this be implemented?

"Old" simfs fs without quota - is just a bindmount => this is a start 
point.
1st level quota for simfs-based Containers (the quota for the 
Container as a whole) can be
implemented using project quota which is going to be accepted to 
mainstream sooner or later.


As for the 2nd level quota (per-user quota inside a CT), we had not 
found any good solution
during our internal discussions, so ideas from community are very 
welcome.


So what do we have at the moment: you can create a simfs-based 
Container in Virtuozzo 7

(see instructions below), but cannot manage quota for it.

What should be done further (that's what you can help us with):

1. Take project quota kernel patches (which Stas Kinsbursky already 
ported to vz7 kernel
some time ago), apply them to current vz7 kernel - you'll get the 
kernel able to manage

project quota.

2. Need to add project quota support to appropriate userspace tools: 
quota-tools and e2fsprogs

see details at https://bugs.openvz.org/browse/OVZ-6619

Hope that helps you understand our plans for simfs in Virtuozzo 7,
and we are looking forward to a hero who could drive this forward! :)



More formal feature description is below:

=== 


1. Feature
simfs filesystem for Virtuozzo 7 Containers


2. Description
https://bugs.openvz.org/browse/OVZ-6613
https://jira.sw.ru/browse/PSBM-40730

Differences between recommended Containers disk backend (ploop) and 
simfs:

https://openvz.org/CT_storage_backends

Unlike previous versions of OpenVZ, simfs layout in Virtuozzo 7 is 
based on bindmounts.

This means once you start a simfs-based Container, effectively
"private" area of a Container is bindmounted to the "root" Container 
area.

That's it.

How to create a simfs-based Container:
* set VEFSTYPE=simfs in the /etc/vz/vz.conf
# vzctl create $VEID

3. Products
Virtuozzo 7, libvzctl-7.0.170


4. Testing
just a validation:
- create a Container
- start/stop the Container
- destroy the Container

5. Known issues
* quota for simfs-based Containers is not implemented
  - 1st level quota (for the Container as a whole) is planned to be 
implemented via project quota

https://bugs.openvz.org/browse/OVZ-6619

  - 2nd level quota (per-user quota inside a Container) is not planned

* online migration of a simfs-based Container is not implemented


6. Feature owner
Kernel part:Stanislav Kinsbursky <skinsbur...@virtuozzo.com>
Userspace part: Igor Sukhih <i...@virtuozzo.com>


--
Best regards,

Konstantin Khorenko,
Virtuozzo Linux Kernel Team
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] simfs challenge

2016-01-28 Thread Nick Knutov

Yes, the question is about ploop, of course.

How do I get the metadata of a ploop image? `man what`?


On 28.01.2016 19:04, Konstantin Khorenko wrote:

Hi Nick,

I believe this is not really a question about vzpbackup, but about
ploop compact, which in your case does not work that efficiently.

Yes, we are working on improving ploop compact and would appreciate it if
you could provide us the metadata of that "bad" ploop image.

Please file an issue at bugs.openvz.org; I'll post instructions there.
(No real user data is needed, only fs metadata: which blocks are 
filled and which are not, etc.)


Thank you.

--
Best regards,

Konstantin Khorenko,
Virtuozzo Linux Kernel Team

On 01/28/2016 02:42 PM, Nick Knutov wrote:

Hello,

One of big reasons to prefer simfs over ploop is disk space overhead in
ploop after using snapshots (for backups for example).
It can be really huge - we have one CT which takes 120Gb instead of 60Gb
(df -h inside) after daily backups with vzpbackup (*) tool. It's
after(!) ploop merge & compact.

Do you plan to make some improvements for this case?

(*) https://github.com/andreasfaerber/vzpbackup

On 28.01.2016 14:27, Konstantin Khorenko wrote:

Hi All,

so, this is the current situation with simfs:

First of all we cannot implement as it was previously (in OpenVZ 6,
2.6.32-x kernels).
This is because there is no vzfs in Virtuozzo 7, but all quota related
code used by simfs
was shared between vzfs and simfs, and maintaining that code for simfs
only is definitely
not the thing we'd be happy to do.
Yes, we definitely want to reuse some mainstream (or write a
mainstream-able) code for it.

So, how this could be implemented?

"Old" simfs fs without quota - is just a bindmount => this is a start
point.
1st level quota for simfs-based Containers (the quota for the
Container as a whole) can be
implemented using project quota which is going to be accepted to
mainstream sooner or later.

As for the 2nd level quota (per-user quota inside a CT), we had not
found any good solution
during our internal discussions, so ideas from community are very
welcome.

So what do we have at the moment: you can create a simfs-based
Container in Virtuozzo 7
(see instructions below), but cannot manage quota for it.

What should be done further (that's what you can help us with):

1. Take project quota kernel patches (which Stas Kinsbursky already
ported to vz7 kernel
some time ago), apply them to current vz7 kernel - you'll get the
kernel able to manage
project quota.

2. Need to add project quota support to appropriate userspace tools:
quota-tools and e2fsprogs
see details at https://bugs.openvz.org/browse/OVZ-6619

Hope that helps to understand our plans on simfs in Virtuozzo 7
and looking forward for a hero who could drive this forward! :)



More formal feature description is below:

=== 



1. Feature
simfs filesystem for Virtuozzo 7 Containers


2. Description
https://bugs.openvz.org/browse/OVZ-6613
https://jira.sw.ru/browse/PSBM-40730

Differences between recommended Containers disk backend (ploop) and
simfs:
https://openvz.org/CT_storage_backends

Unlike previous versions of OpenVZ, simfs layout in Virtuozzo 7 is
based on bindmounts.
This means once you start a simfs-based Container, effectively
"private" area of a Container is bindmounted to the "root" Container
area.
That's it.

How to create a simfs-based Container:
* set VEFSTYPE=simfs in the /etc/vz/vz.conf
# vzctl create $VEID

3. Products
Virtuozzo 7, libvzctl-7.0.170


4. Testing
just a validation:
- create a Container
- start/stop the Container
- destroy the Container

5. Known issues
* quota for simfs-based Containers is not implemented
   - 1st level quota (for the Container as a whole) is planned to be
implemented via project quota
 https://bugs.openvz.org/browse/OVZ-6619

   - 2nd level quota (per-user quota inside a Container) is not planned

* online migration of a simfs-based Container is not implemented


6. Feature owner
Kernel part:Stanislav Kinsbursky <skinsbur...@virtuozzo.com>
Userspace part: Igor Sukhih <i...@virtuozzo.com>


--
Best regards,

Konstantin Khorenko,
Virtuozzo Linux Kernel Team
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users




--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] vzctl create with ploop not working

2016-01-19 Thread Nick Knutov

cat /etc/fstab | grep vz5 # it's SSD
UUID=... /vz5   ext4 
defaults,nofail,discard,relatime,errors=remount-ro,commit=30,data=ordered 1 
2


mount | grep vz5
/dev/sdc1 on /vz5 type ext4 
(rw,_netdev,relatime,discard,errors=remount-ro,commit=30,data=ordered)


2.6.32-042stab108.8

vzctl create ${ve} --ostemplate ${ostemplate} --layout ploop 
--private="$prefix/private/\$VEID/" --diskspace ${disk}

Creating image: /vz5/private/2016.tmp/root.hdd/root.hdd size=209715200K
Creating delta /vz5/private/2016.tmp/root.hdd/root.hdd bs=2048 
size=419430400 sectors v2

Storing /vz5/private/2016.tmp/root.hdd/DiskDescriptor.xml
Opening delta /vz5/private/2016.tmp/root.hdd/root.hdd
Adding delta dev=/dev/ploop19205 
img=/vz5/private/2016.tmp/root.hdd/root.hdd (rw)


and nothing more happens.
After Ctrl+C I see

Destroying private area: /vz5/private/2016.tmp

and nothing happens either. After kill -9 I am left with an empty folder at 
/vz5/private/2016 (not 2016.tmp!).


dmesg | tail
ploop19205: unknown partition table

What can be wrong?

--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] flashcache

2015-11-16 Thread Nick Knutov

I've heard this from a large VPS hosting provider.

Anyway, even our internal projects require more than 100 MB/s at peak and 
more than 100 GB of storage (while only 100 GB are free). So local SSDs 
are cheaper for us than a 10G network and the commercial version of pstorage.


On 16.11.2015 15:22, Corrado Fiore wrote:

Hi Nick,

could you elaborate more on the second point?  As far as I understood, pstorage 
is in fact targeted towards clusters with hundreds of containers, so I am a bit 
curious to understand where you got that information.

If there's anyone on the list that has used pstorage in clusters > 7 - 9 nodes 
and wishes to share his or her experience, that's more than welcome.

Thanks,
Corrado


On 16/11/2015, at 4:44 AM, Nick Knutov wrote:


Unfortunately, pstorage has two major disadvantages:

1) it's not free
2) it not usable for more then 1-4 CT over 1 gigabit network in real world 
cases (as far as I know)


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] flashcache

2015-11-15 Thread Nick Knutov

Unfortunately, pstorage has two major disadvantages:

1) it's not free
2) it is not usable for more than 1-4 CTs over a 1-gigabit network in 
real-world cases (as far as I know)


On 14.11.2015 16:12, Corrado Fiore wrote:

You might want to use Odin Cloud Storage (pstorage) instead, as it goes beyond 
SSD acceleration, i.e. it is distributed and it offers file system corruption 
prevention (background scrubbing).


--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] flashcache

2015-11-13 Thread Nick Knutov


No. Even flashcache 2.x cannot be compiled against recent OpenVZ 
RHEL6 kernels.



On 13.11.2015 15:57, CoolCold wrote:

Bumping up - anyone still on flashcache & openvz kernels? Tried to
compile flashcache 3.1.3 dkms against 2.6.32-042stab112.15 , getting
errors:


--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] migrate fails when fuse mounted

2015-10-28 Thread Nick Knutov

Yes, this way works, but it requires manual actions.

That is bad in my case: I'm trying to migrate CTs across nodes for 
transparent load balancing.



On 28.10.2015 14:48, Sergey Mamonov wrote:

Hello.

Suspending containers does not work correctly with FUSE.
You can try unmounting the sshfs and running the migration again.

2015-10-28 12:22 GMT+03:00 Nick Knutov <m...@knutov.com>:


Hello all,

I have CT with sshfs mounted. When I tried to migrate this CT I got:

Starting live migration of CT ... to ...
OpenVZ is running...
 Checking for CPT version compatibility
 Checking for CPU flags compatibility
Error: Unsupported filesystem fuse
Insufficient CPU capabilities: can't migrate
Error: CPU capabilities check failed!
Error: Destination node CPU is not compatible
Error: Can't continue live migration

Should it be like this? What can be done about it?

Thanks

-- 
Best Regards,

    Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users




___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] recalculate quota

2015-10-08 Thread Nick Knutov
In my case the destination host is usually the same, but ve_private is 
different. It seems there is no way to turn on the second-level quota in that case.

Maybe a hack of changing the CT ID for the copy could help, but it would have to be 
changed back again, and that can bring problems.

In the more general form: I want to recalculate quotas when some files in 
ve_private have been removed from the node side.
In the less general form: I want to move a CT from one ve_private to another while 
skipping some files, and I want to do it like a live migration with near-zero 
downtime.
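
For the per-CT (first-level) quota on simfs, a crude recalculation sketch (assumptions: the vzquota drop command behaves as documented, i.e. removing the quota file forces a full recalculation on the next start; $CTID is a placeholder; the CT has to be stopped, so this is not zero-downtime):

vzctl stop $CTID
# drop the stale quota file so it is rebuilt from the files actually present in ve_private
vzquota drop $CTID
vzctl start $CTID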


On 08.10.2015 20:29, Kir Kolyshkin wrote:

Case from real life:

vzmigrate (or vzmove, which I plan to release soon) with exclude filter
for rsync to exclude hundreds of gigabytes of cache files.


This case is different from what you asked about.

You can turn on quota on the destination host before running rsync,
and the quota is calculated as you copy the files.

Kir.


--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] ploop on ext4 without journal - bug?

2015-10-07 Thread Nick Knutov

Thanks,

yes, I'm using SSDs.

The partition had
tune2fs -O ^has_journal /dev/sdX
applied, so I thought the journal was removed completely and the data= option 
did not matter at all.

OK, what is the right way to fix this for me now?
Will remounting with data=ordered (and still tune2fs -O ^has_journal)
be fine?

Has that bug fix already been built and pushed to the yum repository (the 
ploop package, I suppose)?


On 07.10.2015 17:03, Dmitry Monakhov wrote:
> Sergey Bronnikov <serg...@openvz.org> writes:
>
>> Dima, could you help?
>>
>> On 02:08 Wed 30 Sep , Nick Knutov wrote:
>>> Hello all,
>>>
>>> I have an ext4 partition without journal (I need it so):
> First of all. The subject you mentioned is incorrect. This is not
> nojournal mode. Configuration you want to create is external journal
with data=journal.
>
> data=journal is full data journaling mode. Such mode assumes that it
> will pass through journal all data, but ploop directly issues bios to
> lower-fs(i.e. baypass journal). This done for performance reasons. That
> is why ploop is faster that any other solutions.
> All this means that full journaling for lower(/vz/private) fs is not
> compatible with ploop. So please do not use it, otherwise you'll get
> undefined behavior (most likely silent corruptions in guest-fs)
>
> The glitch you have mentioned most likely happen due to the fact that
> you use SSD. Recently we have found a bug in mm reclaim code which
> result in deadlock (swap on ssd in our case)
https://jira.sw.ru/browse/PSBM-39335
>
> Bug was fixed here:
> *diff-ms-mm-vmscan-do-not-wait-for-page-writeback-for-GFP_NOFS-allocations
> Added to 042stab112_3
>
> mm, vmscan: Do not wait for page writeback for GFP_NOFS
> Backport of mainline patch ecf5fc6e9654
>
>>>
>>> mount | grep vz2
>>> /dev/sde1 on /vz2 type ext4
(rw,relatime,discard,errors=remount-ro,commit=20,data=journal,journal_async_commit)
>>>
>>> debugfs -R features /dev/sde1
>>> debugfs 1.41.12 (17-May-2010)
>>> Filesystem features: ext_attr resize_inode dir_index filetype extent
flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
>>>
>>> When I'm trying to create CT with ploop layout - I've got
>>>
>>> Creating image: /vz2/private/2008.tmp/root.hdd/root.hdd size=10485760K
>>> Creating delta /vz2/private/2008.tmp/root.hdd/root.hdd bs=2048
size=20971520 sectors v2
>>> Storing /vz2/private/2008.tmp/root.hdd/DiskDescriptor.xml
>>> WARNING: /vz2 is mounted with data=writeback not recommended for
ploop; please use data=ordered instead
>>> Opening delta /vz2/private/2008.tmp/root.hdd/root.hdd
>>> Adding delta dev=/dev/ploop58376
img=/vz2/private/2008.tmp/root.hdd/root.hdd (rw)
>>>
>>> and now it freezes. (btw, vzctl says it's data=writeback, but it's
>>> data=journal and journal is removed - is it ok?)
>>>
>>>
>>> When ctrl+c I've got:
>>>
>>> ^C
>>> Cancelling...
>>> Cancelling...
>>> Destroying container private area: /vz2/private/2008.tmp
>>> ^C
>>> Cancelling...
>>> Cancelling...
>>>
>>> so I have to log in other ssh session and kill -9 it.
>>>
>>> Kernel: 042stab108.8
>>>
>>> Is it a bug or I'm doing something wrong?
>>>
>>> --
>>> Best Regards,
>>> Nick Knutov
>>> http://knutov.com
>>> ICQ: 272873706
>>> Voice: +7-904-84-23-130
>>>
>>
>>> ___
>>> Users mailing list
>>> Users@openvz.org
>>> https://lists.openvz.org/mailman/listinfo/users

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] ploop on ext4 without journal - bug?

2015-10-07 Thread Nick Knutov
Thanks,

yes, I'm using SSDs.

The partition had
tune2fs -O ^has_journal /dev/sdX
applied, so I thought the journal was removed completely and the data= option 
did not matter at all.

OK, what is the right way to fix this for me now?
Will remounting with data=ordered (and still tune2fs -O ^has_journal)
be fine?

Has that bug fix already been built and pushed to the yum repository (the 
ploop package, I suppose)?



On 07.10.2015 17:03, Dmitry Monakhov wrote:
> Sergey Bronnikov <serg...@openvz.org> writes:
>
>> Dima, could you help?
>>
>> On 02:08 Wed 30 Sep , Nick Knutov wrote:
>>> Hello all,
>>>
>>> I have an ext4 partition without journal (I need it so):
> First of all. The subject you mentioned is incorrect. This is not
> nojournal mode. Configuration you want to create is external journal with 
> data=journal.
>
> data=journal is full data journaling mode. Such a mode assumes that all
> data will pass through the journal, but ploop issues bios directly to the
> lower fs (i.e. bypasses the journal). This is done for performance reasons;
> that is why ploop is faster than other solutions.
> All this means that full journaling for the lower (/vz/private) fs is not
> compatible with ploop. So please do not use it, otherwise you'll get
> undefined behavior (most likely silent corruption in the guest fs).
>
> The glitch you mentioned most likely happens because you use an SSD.
> Recently we found a bug in the mm reclaim code which results in a
> deadlock (swap on SSD in our case):
> https://jira.sw.ru/browse/PSBM-39335
>
> Bug was fixed here:
> *diff-ms-mm-vmscan-do-not-wait-for-page-writeback-for-GFP_NOFS-allocations
> Added to 042stab112_3
>
> mm, vmscan: Do not wait for page writeback for GFP_NOFS
> Backport of mainline patch ecf5fc6e9654
>
>>> mount | grep vz2
>>> /dev/sde1 on /vz2 type ext4 
>>> (rw,relatime,discard,errors=remount-ro,commit=20,data=journal,journal_async_commit)
>>>
>>> debugfs -R features /dev/sde1
>>> debugfs 1.41.12 (17-May-2010)
>>> Filesystem features: ext_attr resize_inode dir_index filetype extent 
>>> flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
>>>
>>> When I'm trying to create CT with ploop layout - I've got
>>>
>>> Creating image: /vz2/private/2008.tmp/root.hdd/root.hdd size=10485760K
>>> Creating delta /vz2/private/2008.tmp/root.hdd/root.hdd bs=2048 
>>> size=20971520 sectors v2
>>> Storing /vz2/private/2008.tmp/root.hdd/DiskDescriptor.xml
>>> WARNING: /vz2 is mounted with data=writeback not recommended for ploop; 
>>> please use data=ordered instead
>>> Opening delta /vz2/private/2008.tmp/root.hdd/root.hdd
>>> Adding delta dev=/dev/ploop58376 
>>> img=/vz2/private/2008.tmp/root.hdd/root.hdd (rw)
>>>
>>> and now it freezes. (btw, vzctl says it's data=writeback, but it's
>>> data=journal and journal is removed - is it ok?)
>>>
>>>
>>> When ctrl+c I've got:
>>>
>>> ^C
>>> Cancelling...
>>> Cancelling...
>>> Destroying container private area: /vz2/private/2008.tmp
>>> ^C
>>> Cancelling...
>>> Cancelling...
>>>
>>> so I have to log in other ssh session and kill -9 it.
>>>
>>> Kernel: 042stab108.8
>>>
>>> Is it a bug or I'm doing something wrong?
>>>
>>> -- 
>>> Best Regards,
>>> Nick Knutov
>>> http://knutov.com
>>> ICQ: 272873706
>>> Voice: +7-904-84-23-130 
>>>
>>> ___
>>> Users mailing list
>>> Users@openvz.org
>>> https://lists.openvz.org/mailman/listinfo/users

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130 

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] ploop on ext4 without journal - bug?

2015-10-07 Thread Nick Knutov
Do I understand correctly that, after all the patches (in the near future), ploop 
should work fine on ext4 without a journal, with any data= mount option?

And if I do have a journal, should I use data=ordered to avoid silent bugs?

If ext4 has a journal, there is a known "feature" with MySQL: MySQL is 
much faster (nearly two times in my tests) if the mount options are 
"data=journal,journal_async_commit" (and maybe with the O_DIRECT flag) 
compared to data=ordered. How can such a setup be configured (if it is possible 
at all) when MySQL is inside a CT with ploop on ext4 with a journal and 
data=ordered?


On 07.10.2015 21:05, Dmitry Monakhov wrote:

Nick Knutov <m...@knutov.com> writes:


yes, I'm using SSDs.

Partition was
tune2fs -O ^has_journal /dev/sdX
so I thought the journal was removed completely and data= section is not
important at all.
WOW... This is hilarious. Indeed, even without a journal ext4 shows journal-related
options in /proc/mounts. This is a bug (minor, but still). I'll
prepare a patch for mainstream.


Ok, what is the right way to fix it for me now?

OK. If you want to run your host without a journal, that is fine. We do not
test such a configuration, but it does not contradict any assumptions.

Will
remount with data=ordered (and still tune2fs -O ^has_journal)
be fine?

No, you do not have to modify /etc/fstab.

Was that fixed bug already compiled and sent to yum repository (package
ploop I suppose) ?

This was a kernel issue. Just update your kernel to the most recent one
(042stab112_3 or higher):

#yum update vzkernel


On 07.10.2015 17:03, Dmitry Monakhov wrote:

Sergey Bronnikov <serg...@openvz.org> writes:


Dima, could you help?

On 02:08 Wed 30 Sep , Nick Knutov wrote:

Hello all,

I have an ext4 partition without journal (I need it so):

First of all. The subject you mentioned is incorrect. This is not
nojournal mode. Configuration you want to create is external journal

with data=journal.

data=journal is full data journaling mode. Such mode assumes that it
will pass through journal all data, but ploop directly issues bios to
lower-fs(i.e. baypass journal). This done for performance reasons. That
is why ploop is faster that any other solutions.
All this means that full journaling for lower(/vz/private) fs is not
compatible with ploop. So please do not use it, otherwise you'll get
undefined behavior (most likely silent corruptions in guest-fs)

The glitch you have mentioned most likely happen due to the fact that
you use SSD. Recently we have found a bug in mm reclaim code which
result in deadlock (swap on ssd in our case)

https://jira.sw.ru/browse/PSBM-39335

Bug was fixed here:
*diff-ms-mm-vmscan-do-not-wait-for-page-writeback-for-GFP_NOFS-allocations
Added to 042stab112_3

mm, vmscan: Do not wait for page writeback for GFP_NOFS
Backport of mainline patch ecf5fc6e9654


mount | grep vz2
/dev/sde1 on /vz2 type ext4

(rw,relatime,discard,errors=remount-ro,commit=20,data=journal,journal_async_commit)

debugfs -R features /dev/sde1
debugfs 1.41.12 (17-May-2010)
Filesystem features: ext_attr resize_inode dir_index filetype extent

flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize

When I'm trying to create CT with ploop layout - I've got

Creating image: /vz2/private/2008.tmp/root.hdd/root.hdd size=10485760K
Creating delta /vz2/private/2008.tmp/root.hdd/root.hdd bs=2048

size=20971520 sectors v2

Storing /vz2/private/2008.tmp/root.hdd/DiskDescriptor.xml
WARNING: /vz2 is mounted with data=writeback not recommended for

ploop; please use data=ordered instead

Opening delta /vz2/private/2008.tmp/root.hdd/root.hdd
Adding delta dev=/dev/ploop58376

img=/vz2/private/2008.tmp/root.hdd/root.hdd (rw)

and now it freezes. (btw, vzctl says it's data=writeback, but it's
data=journal and journal is removed - is it ok?)


When ctrl+c I've got:

^C
Cancelling...
Cancelling...
Destroying container private area: /vz2/private/2008.tmp
^C
Cancelling...
Cancelling...

so I have to log in other ssh session and kill -9 it.

Kernel: 042stab108.8

Is it a bug or I'm doing something wrong?

--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users
-- 
Best Regards,

Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130


--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130


[Users] ipset and openvz

2015-09-30 Thread Nick Knutov
I know ipset is not virtualized, but I have a number of trusted CTs and I
want to use ipset inside them (and in my case it is OK to share all of its data
between the CTs and the node).

Is it possible to enable ipset for selected CTs?

-- 
Best Regards,
Nick Knutov

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] ploop on ext4 without journal - bug?

2015-09-29 Thread Nick Knutov
Hello all,

I have an ext4 partition without a journal (I need it that way):

mount | grep vz2
/dev/sde1 on /vz2 type ext4 
(rw,relatime,discard,errors=remount-ro,commit=20,data=journal,journal_async_commit)

debugfs -R features /dev/sde1
debugfs 1.41.12 (17-May-2010)
Filesystem features: ext_attr resize_inode dir_index filetype extent flex_bg 
sparse_super large_file huge_file uninit_bg dir_nlink extra_isize

When I try to create a CT with the ploop layout, I get

Creating image: /vz2/private/2008.tmp/root.hdd/root.hdd size=10485760K
Creating delta /vz2/private/2008.tmp/root.hdd/root.hdd bs=2048 size=20971520 
sectors v2
Storing /vz2/private/2008.tmp/root.hdd/DiskDescriptor.xml
WARNING: /vz2 is mounted with data=writeback not recommended for ploop; please 
use data=ordered instead
Opening delta /vz2/private/2008.tmp/root.hdd/root.hdd
Adding delta dev=/dev/ploop58376 img=/vz2/private/2008.tmp/root.hdd/root.hdd 
(rw)

and then it freezes. (BTW, vzctl says it is data=writeback, but it is
data=journal and the journal is removed - is that OK?)


After Ctrl+C I get:

^C
Cancelling...
Cancelling...
Destroying container private area: /vz2/private/2008.tmp
^C
Cancelling...
Cancelling...

so I have to log in from another SSH session and kill -9 it.

Kernel: 042stab108.8

Is it a bug or I'm doing something wrong?

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130 

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] live migration inside one physical node

2015-09-08 Thread Nick Knutov

This way takes time. It's definitely not a _live_ migration :(


On 08.09.2015 14:00, Kevin Holly [Fusl] wrote:
> Hi!
>
> You can just do "vzctl suspend CTID", move the container to another
place and then restore it there with "vzctl restore CTID" after you
changed the configuration file.
>
> On 09/08/2015 07:14 AM, Nick Knutov wrote:
>
> > Is it possible to do live migration between physical disks inside one
> > physical node?
>
> > I suppose the answer is still no, so the question is what is possible to
> > do for this?
>
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] live migration inside one physical node

2015-09-07 Thread Nick Knutov

Is it possible to do a live migration between physical disks inside one
physical node?

I suppose the answer is still no, so the question is: what can be done
instead? (One possible approach is sketched below.)
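
A minimal sketch of the suspend/move/restore route suggested in the replies (assumptions: a ploop layout, $CTID as a placeholder, the /vz and /vz2 paths used elsewhere in this list, and downtime equal to the copy time, so it is not truly live):

vzctl suspend $CTID
# copy the private area to the other disk, then point the config at the new location
rsync -a /vz/private/$CTID/ /vz2/private/$CTID/
sed -i 's|/vz/private|/vz2/private|' /etc/vz/conf/$CTID.conf
vzctl restore $CTID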

-- 
Best Regards,
Nick Knutov
Voice: +7-904-84-23-130 

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] ext4 mount options for Ploop on SSD

2015-09-07 Thread Nick Knutov
Hello all,

What are the best/recommended mount options for ext4 on SSD disks for a
large number of ploop-only CTs?
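
Not an official recommendation, only a sketch pulling together the options mentioned elsewhere in this list (assumptions: data=ordered because that is what ploop expects, discard for SSD TRIM, noatime to reduce write amplification; /dev/sdX1 and /vz are placeholders):

# /etc/fstab entry for an SSD-backed /vz that holds ploop images
/dev/sdX1  /vz  ext4  defaults,noatime,discard,data=ordered,errors=remount-ro  1 2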

-- 
Best Regards,
Nick Knutov
Voice: +7-904-84-23-130 

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] SIMFS users

2015-07-23 Thread Nick Knutov

Yes. There are a number of cases where using ploop is a bad idea; a large part
of them have already been mentioned on this list.

For example, I can use ploop only for small CTs (less than 5-10 GB), since
ploop disadvantages such as image size (even after compaction) are not critical in
that case.


On 21.07.2015 12:21, Sergey Bronnikov wrote:
 we want to find people who still use simfs for OpenVZ containers.
 Do we have such users?

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130 



___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] target VE_PRIVATE for vzmigrate

2015-05-07 Thread Nick Knutov
Hello all,

I see it is now possible to choose the target VE_PRIVATE for vzmigrate
by changing /etc/vz/vz.conf on the destination node -
https://bugzilla.openvz.org/show_bug.cgi?id=2523 (and it works - I checked)

But I'd like to specify the destination VE_PRIVATE as a parameter to
`vzmigrate`. Is that possible?
(I know I can edit the source; I just want to check whether it is already
implemented, since I can't find it.)
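
For reference, a sketch of the vz.conf workaround from that bug, as I understand it (assumptions: the stock config variable name VE_PRIVATE and a /vz2 target as used elsewhere in this list):

# on the destination node, in /etc/vz/vz.conf, before running vzmigrate:
VE_PRIVATE=/vz2/private/$VEID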

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130 



___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Shortest guide about running OpenVZ containers on top of ZFS

2014-11-12 Thread Nick Knutov
Well, a good beginning, but...

as we discussed earlier:

for most hosting purposes users need quotas, and quotas work
only with ext4. So the only realistic use case is ploop over ZFS, and
the only good reason to have ZFS there is an l2arc cache on SSD or
a large number of SSD disks in raidz3 over iSCSI...

...and there are still no speed tests.



On 12.11.2014 15:20, Pavel Odintsov wrote:
 Any questions/suggestions/performance test and other feedback are
 welcome here or on GitHub!

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Shortest guide about running OpenVZ containers on top of ZFS

2014-11-12 Thread Nick Knutov
When you need quotas and there is only one way to get them...

I don't think ploop is meant to solve ext4 troubles. It just solves some
troubles (which are common to a lot of file systems).

ZFS in this case is more of an alternative to Parallels Cloud Storage, which is
closed source and hard to get even for money (I contacted Parallels
sales several times and never got a price list from them).

Also, ZFS is good in the case of a NAS with a large number of SSDs, or ordinary
disks with an l2arc cache on SSD, and you can use ploop over ZFS in that
case. I suppose ploop over glusterfs (for example), and over most other
file systems with any redundancy (I mean any realization of the RAID idea),
would be more pain than a usable solution, by comparison.


On 13.11.2014 0:25, Pavel Odintsov wrote:
 Hello, Nick!
 
 Ploop is really useless on ZFS because it solves ext4 troubles, and
 ZFS does not have these issues by design. Quotas may be a problem - a good
 addition. I just added a remark about quotas to the comparison table.
 
 On Wed, Nov 12, 2014 at 9:56 PM, Nick Knutov m...@knutov.com wrote:
 Well, good beginning, but..

 as we discussed earlier:

 in most cases of hosting purposes users need quotes. And quotes work
 only with ext4. So the only real possible case of usage is ploop over
 zfs and the only good reason to have zfs here is l2arc cache on ssd or
 large amount SSD disks in raidz3 over iSCSI...

 ..and there are still no speed tests.



 12.11.2014 15:20, Pavel Odintsov пишет:
 Any questions/suggestions/performance test and other feedback are
 welcome here or on GitHub!

 --
 Best Regards,
 Nick Knutov
 http://knutov.com
 ICQ: 272873706
 Voice: +7-904-84-23-130
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 
 
 

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Shortest guide about running OpenVZ containers on top of ZFS

2014-11-12 Thread Nick Knutov

Oh. I missed this.


On 13.11.2014 2:28, Devon B. wrote:
 I don't think you can just run ploop over ZFS.   Ploop requires ext4 as
 the host filesystem according to bug 2277:
 https://bugzilla.openvz.org/show_bug.cgi?id=2277

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] convert to ploop

2014-11-10 Thread Nick Knutov
Thanks, this is working.

For the wiki - I did:

find /etc/vz/conf/*.conf -type f -exec sed -i.bak 's/DISKINODES/#DISKINODES/g' {} \;

VE=123 ; vzctl stop $VE ; vzctl convert $VE ; vzctl start $VE

Unfortunately, I do not understand how to edit MediaWiki, and
I'm not sure which page would be the best place to add this.

On 25.10.2014 7:32, Kirill Kolyshkin wrote:
 
 On Oct 24, 2014 5:33 PM, Devon B. <devo...@virtualcomplete.com> wrote:

 I think what Kir was getting at was set the diskinodes equal to 65536
 x GiB before converting.  So for 40GiB, set diskinodes to 2621440
 
 Either that, or just remove the DISKINODES from CT config
 
 On 10/24/2014 8:05 PM, Nick Knutov wrote:

 Thanks, now I understand why this occurred, but what is the easiest way
 to convert a lot of different CTs to ploop? As I remember there is no
 way to set up unlimited diskinodes or disable them (in case I want to
 use CT size when converting to ploop and don't want to think about
 inodes at all).


On 25.10.2014 5:31, Kir Kolyshkin wrote:

 [...]
 Previously, we didn't support setting diskinodes for ploop, but later we
 found
 a way to implement it (NOTE: for vzctl create and vzctl convert only).
 The trick we use it we create a file system big enough to accomodate the
 requested number of inodes, and then use ploop resize (in this case
 downsize)
 to bring it down to requested amount.

 In this case, 1G inodes requirements leads to creation of 16TB
 filesystem
 (remember, 1 inode per 16K). Unfortunately, such huge FS can't be
 downsized
 to as low as 40G, the minimum seems to be around 240G (values printed in
 the error message are in sectors which are 512 bytes each).

 Solution: please be reasonable when requesting diskinodes for ploop.



 ___
 Users mailing list
 Users@openvz.org mailto:Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 
 
 
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] convert to ploop

2014-10-24 Thread Nick Knutov
Thanks, now I understand why this occurred, but what is the easiest way
to convert a lot of different CTs to ploop? As I recall, there is no
way to set unlimited diskinodes or disable them (in case I want to
use the CT size when converting to ploop and don't want to think about
inodes at all).


On 25.10.2014 5:31, Kir Kolyshkin wrote:
 [...]
 Previously, we didn't support setting diskinodes for ploop, but later we
 found
 a way to implement it (NOTE: for vzctl create and vzctl convert only).
 The trick we use is that we create a file system big enough to accommodate the
 requested number of inodes, and then use ploop resize (in this case a
 downsize) to bring it down to the requested amount.
 
 In this case, a 1G-inode requirement leads to the creation of a 16 TB filesystem
 (remember, 1 inode per 16K). Unfortunately, such a huge FS can't be downsized
 to as low as 40G; the minimum seems to be around 240G (the values printed in
 the error message are in sectors of 512 bytes each).
 
 Solution: please be reasonable when requesting diskinodes for ploop.
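
Putting the numbers above together, a hedged sketch of the per-CT sequence (assumptions: $VEID is a placeholder, a 40 GiB container, 1 inode per 16 KB as noted above, and the soft:hard syntax of --diskinodes):

# 40 GiB at 1 inode per 16 KB = 65536 inodes/GiB * 40 = 2621440 inodes
vzctl set $VEID --diskinodes 2621440:2621440 --save
vzctl stop $VEID && vzctl convert $VEID && vzctl start $VEID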


-- 
Best Regards,
Nick Knutov
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] vzmigrate to /vz2 instead of /vz

2014-09-11 Thread Nick Knutov
I have an old server with ordinary disks and a new server with two smaller SSDs.
I have /vz on one disk and /vz2 on the other.

I want to live-migrate CTs from the old server to a specific partition on
the new server, but I can't find how to do it. Does anybody know?

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] backup ploop

2014-09-11 Thread Nick Knutov
How can I back up ploop CTs, with and without snapshots, if

1) I do not need the state of running processes, only the files
2) but I have MySQL in some CTs, and some files can end up broken if they are
not synced/closed
3) the backup must happen without downtime; I can't stop the CT, make the backup,
then start the CT again (a rough sketch follows below)
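
A rough sketch of the snapshot route (assumptions: a ploop layout, the vzctl snapshot options also used by the vzpbackup tool mentioned later in this list, $CTID and /backup as placeholders, and that a filesystem-level snapshot is only crash-consistent for MySQL unless you also dump or lock the tables):

ID=$(uuidgen)
# freeze the disk state without saving running processes
vzctl snapshot $CTID --id $ID --skip-suspend --skip-config
# copy the image while the CT keeps writing to the new top delta
tar czf /backup/$CTID.tar.gz -C /vz/private/$CTID root.hdd
# merge the delta back and drop the snapshot
vzctl snapshot-delete $CTID --id $ID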

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] vzmigrate to /vz2 instead of /vz

2014-09-11 Thread Nick Knutov
I'm not good enough with such OpenVZ internals and was hoping there was a
ready-made solution. I found https://openvz.org/Vzmigrate_filesystem_aware
but it is for an older version of vzmigrate.

Yes, I tried symlinks:

1) /vz2 - /vz2 as a symlink on /vz, and back:
I had to change the private/root paths in the CT config after
vzmigrate + vzmigrate back, and the files were not removed after the second
vzmigrate (from the node where the symlink was).

2) /vz - /vz2:
looks OK, but I have to change the paths in the CT config afterwards, so the CT
has to be restarted with downtime.

So none of this looks good. Maybe it could be better with mount
--bind, but that is also not a good way.


On 12.09.2014 5:33, Devon B. wrote:
 On 9/11/2014 7:00 PM, Nick Knutov wrote:
 I have old server with usual disks and new server with two ssd which are
 smaller size. I have /vz on one disk and /vz2 on another.

 I want to live migrate CTs from the old server to specified partition on
 the new server but I can't find how to do it. Does anybody know?

 You could get dirty and do it manually with ploop send and
 checkpointing.  However, have you tried just using a symlink from
 /vz/private/VEID to /vz2/private/VEID?


-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] vzmigrate to /vz2 instead of /vz

2014-09-11 Thread Nick Knutov
I did exactly that.

Migration to a symlink works, and the CT runs OK afterwards. But the
private/root paths are rewritten to /vz after migration, and for simfs with
billions of small files, running the CT through a symlink can be slower.

Migration from a symlink also works, with the same issues, plus the source
folder with the CT is not deleted after migration, only the symlink.


On 12.09.2014 7:16, Devon B. wrote:
 Like I said though, use a symlink per VE, not the entire vz2/vz
 directory.  Then you won't have to change anything in the config.  Just
 create a symlink for the virtual servers you want on the second SSD
 prior to migrating.
 
 mkdir /vz2/private/VEID
 ln -s /vz2/private/VEID /vz/private/VEID
 
 Then try the migration, does it work?
 
 On 9/11/2014 8:51 PM, Nick Knutov wrote:
 I'm not good enough with such openvz internals and was hoped there is
 ready solution. I found https://openvz.org/Vzmigrate_filesystem_aware
 but it is for older version of vzmigrate.

 Yes, I tried symlink and

 1) /vz2 - /vz2 as symlink on /vz and back
 and I have changed private/root paths in CT conf after
 vzmigrate+vzmigrate back and files was not removed after second
 vzmigrate (from node where symlink was).

 2) /vz - /vz2
 looks ok but I have to change pathes in CT config after so CT should be
 restarted with downtime.

 So, all this does not look good. May be it can be better with mount
 --bind, but this is also not a good way.



On 12.09.2014 5:33, Devon B. wrote:
 On 9/11/2014 7:00 PM, Nick Knutov wrote:
 I have old server with usual disks and new server with two ssd which
 are
 smaller size. I have /vz on one disk and /vz2 on another.

 I want to live migrate CTs from the old server to specified
 partition on
 the new server but I can't find how to do it. Does anybody know?

 You could get dirty and do it manually with ploop send and
 checkpointing.  However, have you tried just using a symlink from
 /vz/private/VEID to /vz2/private/VEID?

 
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] vzmigrate to /vz2 instead of /vz

2014-09-11 Thread Nick Knutov
There are cases where simfs is the better solution, and we have such cases.
Anyway, the other problems remain.


On 12.09.2014 8:56, Devon B. wrote:
 Shouldn't be millions of small files with ploop.  Should just be the
 one: root.hdd.  Where it is mounted shouldn't matter (VE_ROOT).
 
 On 9/11/2014 9:57 PM, Nick Knutov wrote:
 I did exactly so.

 Migration to symlink is working. And CT is running ok after. But
 private/root paths are rewritten after migration to /vz + for simfs with
 billions small files running CT from symlink can be slower.

 Migration from symlink is also working. With the same issues plus source
 folder with CT is not deleted after migration, only symlink.


On 12.09.2014 7:16, Devon B. wrote:
 Like I said though, use a symlink per VE, not the entire vz2/vz
 directory.  Then you won't have to change anything in the config.  Just
 create a symlink for the virtual servers you want on the second SSD
 prior to migrating.

 mkdir /vz2/private/VEID
 ln -s /vz2/private/VEID /vz/private/VEID

 Then try the migration, does it work?

 On 9/11/2014 8:51 PM, Nick Knutov wrote:
 I'm not good enough with such openvz internals and was hoped there is
 ready solution. I found https://openvz.org/Vzmigrate_filesystem_aware
 but it is for older version of vzmigrate.

 Yes, I tried symlink and

 1) /vz2 - /vz2 as symlink on /vz and back
 and I have changed private/root paths in CT conf after
 vzmigrate+vzmigrate back and files was not removed after second
 vzmigrate (from node where symlink was).

 2) /vz - /vz2
 looks ok but I have to change pathes in CT config after so CT should be
 restarted with downtime.

 So, all this does not look good. May be it can be better with mount
 --bind, but this is also not a good way.



On 12.09.2014 5:33, Devon B. wrote:
 On 9/11/2014 7:00 PM, Nick Knutov wrote:
 I have an old server with regular disks and a new server with two SSDs,
 which are smaller. I have /vz on one disk and /vz2 on another.

 I want to live migrate CTs from the old server to a specific partition on
 the new server, but I can't find out how to do it. Does anybody know?

 You could get dirty and do it manually with ploop send and
 checkpointing.  However, have you tried just using a symlink from
 /vz/private/VEID to /vz2/private/VEID?
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] flashcache

2014-07-10 Thread Nick Knutov
There are two important points here:

1) As Pavel wrote, IO can't be separated easily with one fs (for now, but I
think this can change with cgroups in the future).

2) Per-user quota inside a CT is currently supported for ext4 only.

These two points may or may not matter for you.

In real life we have never had any IO issues in production (and we
are migrating to SSD, so IO is always sufficient now), but most of
our customers' CTs are shared hosting in some way, so having per-user
quota is critical.
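
For reference, a rough sketch of how second-level (per-user) quotas are enabled
for a container; the CTID and limit are only examples:

# vzctl set 123 --quotaugidlimit 500 --save
# vzctl restart 123

The limit is the number of user/group IDs for which quota is kept (0 disables
it), and the change takes effect after a restart.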


10.07.2014 14:42, Aleksandar Ivanisevic wrote:
 Why is everyone insisting on ext4 and even ext4 in individual zvols? I
 have done some testing with root and private directly on a zfs file
 system and so far everything seems to work just fine.
 
 What am I to expect down the road?

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] flashcache

2014-07-10 Thread Nick Knutov
I think you are talking about two different cases here.

One is making an HA backup node. When backing up a full node to
another node (1:1), zfs send/receive is much better (and the goal is to
save data, not running processes). Without ZFS, ploop snapshotting and
vzmigrate are good enough (over SSD), while rsync with ext4 (simfs inside
the CT) is a real pain.

The other case is migrating a large number of CTs across a large number of
nodes for resource usage balancing [with zero downtime]. There is no
alternative to vzmigrate here, although zfs send/receive with a
per-container ZVOL can speed up the process [if it's important to
transfer between nodes faster with less network usage].
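
For the 1:1 backup case, a rough sketch of an incremental send/receive, assuming
a made-up pool/dataset name and an earlier snapshot already present on both sides:

# zfs snapshot tank/vz@today
# zfs send -i tank/vz@yesterday tank/vz@today | ssh backup-node zfs receive -F tank/vz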

10.07.2014 15:35, Pavel Odintsov wrote:
 Why? ZFS send/receive is able to make a bit-by-bit identical copy of the FS.
 I thought the point of migration is that the CT doesn't notice any
 change, so I don't see why the inode numbers should change.
 Do you have a really working zero-downtime vzmigrate on ZFS?
 

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] flashcache

2014-07-08 Thread Nick Knutov
We are using this only for caching reads (mode "thru", i.e. writethrough), not writes.
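
For reference, a writethrough cache like that is typically created along these
lines (the device names are placeholders, not our real setup):

# flashcache_create -p thru cachedev /dev/ssd1 /dev/sdb1
# mount /dev/mapper/cachedev /vz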


(offtopic) We cannot use ZFS. Unfortunately, a NAS with something like
Nexenta is too expensive for us.

Anyway, we are doing a complete migration to SSD. It's just cheaper.


08.07.2014 22:23, Pavel Odintsov wrote:
 I know of a few incidents of ___FULL___ data loss among flashcache users.
 Beware of it in production.
 
 If you want speed you can try ZFS with an l2arc/zvol cache, because it's
 a native solution.

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] flashcache

2014-07-08 Thread Nick Knutov
I know about this project, but what about the stability and compatibility of
ZFS on Linux with the OpenVZ kernel? Has anyone ever tested it?

Also, with ext4 I can always boot to rescue mode at any [of our] datacenters
and, for example, move data to another/new server. I have no idea how to
get at the ZFS data with the usual rescue mode if something goes wrong with
the hardware or a recently installed kernel.
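
For what it's worth, most rescue or live environments that ship ZFS on Linux
should let you get at the data with something like the following; the pool
name is made up, and I have not tried this on a broken OpenVZ node:

# zpool import -f -o readonly=on tank
# zfs list
# zfs mount -a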

On the other hand, we have been using flashcache in production for about two
years, with zero problems during all that time. It is not as fast as
bcache (which is not compatible with OpenVZ, I think), but it solves the
problem well.


08.07.2014 23:52, Scott Dowdle wrote:
 Greetings,
 
 - Original Message -
 (offtopic) We cannot use ZFS. Unfortunately, a NAS with something like
 Nexenta is too expensive for us.
 
 From what I've gathered from a few presentations, ZFS on Linux 
 (http://zfsonlinux.org/) is as stable as, and more performant than, it is on the 
 OpenSolaris forks... so you can build your own if you can spare the people to 
 learn the best practices.
 
 I don't have a use for ZFS myself so I'm not really advocating it.
 
 TYL,
 

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] vzctl --noatime

2013-05-13 Thread Nick Knutov
Hello all,

A couple of years ago vzctl had a --noatime option. But now there is no such
option:

# vzctl set ${ve} --noatime yes --save
non-option ARGV-elements: --save

# man vzctl | grep noatime
#

What happened to it? I did not find anything about it on Google.
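
One possible workaround, sketched here only as a guess and not verified against
current vzctl, is a per-CT mount script that remounts the container root with
noatime (CTID 123 is just an example):

/etc/vz/conf/123.mount:
  #!/bin/bash
  # assumption: VE_ROOT is already mounted when this script runs
  . /etc/vz/vz.conf
  . "$VE_CONFFILE"
  mount -o remount,noatime "$VE_ROOT"

and don't forget to chmod +x the script.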

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] AES-NI inside CT

2013-05-11 Thread Nick Knutov
Hello,

on the node # openssl engine -t
(aesni) Intel AES-NI engine
 [ available ]

# vzctl enter 123
inside CT # openssl engine -t
(dynamic) Dynamic engine loading support
 [ unavailable ]

Is AES-NI available inside a CT? Should I add some capabilities to the CT, or
do something else?
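
A couple of quick checks that might narrow this down (only a sketch; whether the
result reflects the host CPU depends on the kernel):

inside CT # grep -w -o -m 1 aes /proc/cpuinfo
inside CT # openssl speed -evp aes-128-cbc

Comparing the -evp run against a plain "openssl speed aes-128-cbc" run should
show a large difference if AES-NI is actually being used, since only the EVP
interface goes through the AES-NI code path.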

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] nginx, inside openvz CT, worker_cpu_affinity

2012-07-28 Thread Nick Knutov

Hello all,


 http://nginx.org/en/docs/ngx_core_module.html#worker_cpu_affinity

 worker_processes  4;
 worker_cpu_affinity 0001 0010 0100 1000;

 Binds worker processes to the sets of CPUs.


Does it make sense inside an OpenVZ container?

Will it work if --cpus is not specified in the CT config?
How will it work if --cpus is specified and less than the number of physical cores?
How will it work if --cpus is specified and the CPU has hyper-threading, and
1) --cpus is less than the number of CPU cores,
2) or --cpus is less and odd(!) (example: --cpus 3, physical CPU cores: 4 + HT)?
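
A quick way to see what the container actually gets before choosing the
affinity masks (a sketch; CTID 123 is an example, and as far as I know
/proc/cpuinfo inside the CT reflects the --cpus limit, but verify on your kernel):

on the node # vzctl set 123 --cpus 4 --save
inside CT   # grep -c ^processor /proc/cpuinfo
inside CT   # nproc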


--
Best Regards,
Nick Knutov
___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Is it possible to use sa inside CT's?

2009-11-09 Thread Nick Knutov


Yes,

# accton on
Turning on process accounting, file set to the default 
'/var/log/account/pacct'.

accton: Operation not permitted

Is it possible to write a patch to support this, or is it impossible at all 
because of some limitation of OpenVZ?
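
accton needs CAP_SYS_PACCT, which containers do not get by default. One thing
that might be worth trying, though I have not verified that the OpenVZ kernel
honours it inside a CT:

# vzctl set 123 --capability sys_pacct:on --save
# vzctl restart 123
inside CT # accton on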


Thorsten Schifferdecker wrote:

Hi Nick,

Nick Knutov wrote:

Hello all,

Is it possible to use sa inside CT's?

When I do

# sa -im

inside CT I get

sa: ERROR -- print_stats_nicely called with num_calls == 0

But everything seems to be OK if run on the node.


accton is not running and has not logged any entries in the log files.

AFAIK accton (BSD process accounting) can't be run in an OpenVZ container.

Bye,
Thorsten
___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users



--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


[Users] Is it possible to use sa inside CT's?

2009-11-08 Thread Nick Knutov

Hello all,

Is it possible to use sa inside CT's?

When I do

# sa -im

inside CT I get

sa: ERROR -- print_stats_nicely called with num_calls == 0

But everything seems to be OK if run on the node.

--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users