Re: [Users] Upstream kernel and vzctl updated documentation

2015-02-25 Thread Devon B.

Scott,

Thanks for the response.  I have done some additional testing over the
past few days.  Checkpointing has been implemented, but it doesn't seem
to be bug-free yet (the dump worked, but I couldn't get it to restore).
vzctl does seem to work better out of the box than the LXC userspace
tools for unprivileged containers, but it is obviously lacking in
configuration options.



Scott Dowdle <dow...@montanalinux.org>
Wednesday, February 25, 2015 11:33 AM
Greetings,

- Original Message -

No. I don't think much additional effort has been put into making 
vzctl more compatible with upstream kernels. I could be wrong and 
would be happy to be. :)


We are expecting an EL7-based OpenVZ kernel branch to drop in the
not-too-distant future, and if Kir's presentation description for
LinuxFest Northwest at the end of April is any indicator, running Docker
within an OpenVZ container, as well as running Docker under an OpenVZ
kernel, will be things we learn about as well. I think some of that
will apply to the EL6-based OpenVZ kernel too, but I'm not certain.


TYL,
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] Upstream kernel and vzctl updated documentation

2015-02-21 Thread Devon B.
Is there updated documentation on vzctl with the upstream kernel?

If not, maybe we can add some to the wiki.

1.)  How does LOCAL_UID work?  Does it just add n to the container's
UIDs?  Is there a limit?  For example, with LOCAL_UID=10, UID 0
(root) becomes 10, but what about UID 10 -- does it become
20?  What about 100?

2.)  How do you define an AppArmor profile or SELinux context for the container?

3.)  Has checkpointing support with CRIU advanced?

4.)  Has a list function been built in to replace the use of vzlist?

5.)  Has any implementation like fuse-procfs or lxcfs been tried for a
per-container meminfo/cpuinfo/stat?

6.)  How is --cpus implemented?  As far as I know, cpuset.cpus only accepts
static values; for instance, 0-1 will assign CPU 0 and CPU 1 to a
container, in contrast to OpenVZ's dynamic vcpu allocation.  How does vzctl
handle the allocation?

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] simfs with ploop?

2015-01-21 Thread Devon B.
I can't speak as to how to address the issue, but why do you consider it
messed up?  I logged in to a few nodes and saw the same thing on all of
them, and I don't remember this being any different in the past.  The
ploop device only exists outside of the container (when mounted).
Inside the container there is just a reference; no actual device node exists.

I don't know enough about the original issue; what are you trying to
accomplish with the ploop device inside the container?


 Rene C. <ope...@dokbua.com>
 Wednesday, January 21, 2015 6:47 AM
 I've gone through all containers and actually some of them work
 correctly, only some are messed up like this.

 Take for example this one:

 [root@server22 ~]# vzctl restart 2201
 Restarting container
 Stopping container ...
 Container was stopped
 Unmounting file system at /vz/root/2201
 Unmounting device /dev/ploop27939
 Container is unmounted
 Starting container...
 Opening delta /vz/private/2201/root.hdd/root.hdd
 Adding delta dev=/dev/ploop27939
 img=/vz/private/2201/root.hdd/root.hdd (ro)
 Adding delta dev=/dev/ploop27939
 img=/vz/private/2201/root.hdd/root.hdd.{7a09b730-f2d6-4b21-b856-0bd6ca420a6e}
 (rw)
 Mounting /dev/ploop27939p1 at /vz/root/2201 fstype=ext4
 data='balloon_ino=12,'
 Container is mounted
 Adding IP address(es): (redacted)
 Setting CPU limit: 400
 Setting CPU units: 50
 Setting CPUs: 4
 Container start in progress...

 So apparently the ploop device should be /dev/ploop27939. Everything
 seems to work; inside the container this device is referred to by
 /proc/mounts:

 [root@server22 ~]# vzctl exec 2201 cat /proc/mounts
 /dev/ploop27939p1 / ext4
 rw,relatime,barrier=1,data=ordered,balloon_ino=12 0 0
 proc /proc proc rw,relatime 0 0
 sysfs /sys sysfs rw,relatime 0 0
 none /dev tmpfs rw,relatime,mode=755 0 0
 none /dev/pts devpts rw,relatime,mode=600,ptmxmode=000 0 0
 none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0

 But the device is actually missing:

 [root@server22 ~]# vzctl exec 2201 ls -l /dev/ploop27939p1
 ls: /dev/ploop27939p1: No such file or directory

 In fact, there doesn't seem to be ANY /dev/ploop devices in this container

 [root@server22 ~]# vzctl exec 2201 ls -l /dev/ploop*
 ls: /dev/ploop18940: No such file or directory
 ls: /dev/ploop18940p1: No such file or directory
 ls: /dev/ploop26517: No such file or directory
 ls: /dev/ploop26517p1: No such file or directory
 ls: /dev/ploop27379: No such file or directory
 ls: /dev/ploop27379p1: No such file or directory
 ls: /dev/ploop27939: No such file or directory
 ls: /dev/ploop27939p1: No such file or directory
 ls: /dev/ploop50951: No such file or directory
 ls: /dev/ploop50951p1: No such file or directory
 ls: /dev/ploop52860: No such file or directory
 ls: /dev/ploop52860p1: No such file or directory
 ls: /dev/ploop58415: No such file or directory
 ls: /dev/ploop58415p1: No such file or directory

 Why does it show devices when there are none present?  Obviously
 something is messed up; how can we fix this?




 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 Rene C. <ope...@dokbua.com>
 Tuesday, January 20, 2015 12:04 PM

 No takers?



 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 Rene C. <ope...@dokbua.com>
 Tuesday, January 13, 2015 12:00 PM
 Hm, well I removed the scripts, now I get the error:

 repquota: Can't stat() mounted device /dev/ploop50951p1: No such file
 or directory

 I don't know if this is related at all; it kinda started after a
 recent update to the latest kernel, 2.6.32-042stab102.9.

 Now if I go into any container on this hardware node, the
 /dev/ploopXXX devices listed in /proc/mounts don't exist.

 For example:

 root@vps2202 [~]# cat /proc/mounts
 /dev/ploop50951p1 / ext4
 rw,relatime,barrier=1,stripe=256,data=ordered,balloon_ino=12,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group
 0 0
 /dev/simfs /backup simfs rw,relatime 0 0
 proc /proc proc rw,relatime 0 0
 none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
 sysfs /sys sysfs rw,relatime 0 0
 none /dev tmpfs rw,relatime,size=7992992k,nr_inodes=1998248 0 0
 none /dev/pts devpts rw,relatime,mode=600,ptmxmode=000 0 0
 root@vps2202 [~]# ll /dev/ploop50951p1
 /bin/ls: /dev/ploop50951p1: No such file or directory

 There are quite a few /dev/ploop* devices under /dev, but not the one
 linked in /proc/mounts.  

 This goes for all containers on this hardware node.  Other nodes not
 yet upgraded to the latest kernel do not have this problem.

 Any ideas?





 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 Kir Kolyshkin <k...@openvz.org>
 Friday, December 26, 2014 6:31 PM



 No, the script (and its appropriate symlinks) is (re)created on every
 start (actually mount)
 of a simfs-based container. It is a 

Re: [Users] simfs with ploop?

2015-01-21 Thread Devon B.
I looked at a few more containers.  A couple have the ploop device and a
couple don't.  I'm not sure why.  Most that do have the ploop devices have
a bunch of stale ones.  For instance:

# vzctl exec  ls -l /dev/ploop*
b-x--- 1 root root 182, 300625 Jan 18 22:57 /dev/ploop18789p1
b-x--- 1 root root 182, 355073 Jan 18 22:57 /dev/ploop22192p1
b-x--- 1 root root 182, 371265 Jan 18 22:57 /dev/ploop23204p1
b-x--- 1 root root 182, 428529 Jan 18 22:57 /dev/ploop26783p1
b-x--- 1 root root 182, 525073 Jan 18 22:57 /dev/ploop32817p1
b-x--- 1 root root 182, 655857 Jan 18 22:57 /dev/ploop40991p1
b-x--- 1 root root 182, 727537 Jan 18 22:57 /dev/ploop45471p1
brw-rw---T 1 root disk 182, 749697 Jan 18 22:57 /dev/ploop46856p1
b-x--- 1 root root 182, 773185 Jan 18 22:57 /dev/ploop48324p1
b-x--- 1 root root 182, 864529 Jan 18 22:57 /dev/ploop54033p1
b-x--- 1 root root 182, 897201 Jan 18 22:57 /dev/ploop56075p1

The active one is pretty obvious (/dev/ploop46856p1).  It doesn't seem
to be specific to a node or kernel; affected and unaffected containers
sit side by side on the same system.

Have you filed a bug report?

 Rene C. <ope...@dokbua.com>
 Wednesday, January 21, 2015 11:27 AM
 The reason I became aware of the problem was that a cPanel server
 started sending this message every morning:

 repquota: Can't stat() mounted device /dev/ploop50951p1: No such file
 or directory

 All containers on another hardware node, and several on this one, have
 the devices working correctly within the containers.


 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 Devon B. <devo...@virtualcomplete.com>
 Wednesday, January 21, 2015 11:11 AM
 I can't speak as to how to address the issue, but why do you consider
 it messed up?  I logged in to a few nodes and saw the same thing on
 all of them and I don't remember this being any different in the
 past.  The ploop device only exists outside of the container (when
 mounted).  Inside the container is just a reference, no actual device
 exists.

 I don't know enough about the original issue, what are you trying to
 accomplish with the ploop device inside the container?


 Rene C. <ope...@dokbua.com>
 Wednesday, January 21, 2015 6:47 AM
 I've gone through all containers and actually some of them work
 correctly, only some are messed up like this.

 Take for example this one:

 [root@server22 ~]# vzctl restart 2201
 Restarting container
 Stopping container ...
 Container was stopped
 Unmounting file system at /vz/root/2201
 Unmounting device /dev/ploop27939
 Container is unmounted
 Starting container...
 Opening delta /vz/private/2201/root.hdd/root.hdd
 Adding delta dev=/dev/ploop27939
 img=/vz/private/2201/root.hdd/root.hdd (ro)
 Adding delta dev=/dev/ploop27939
 img=/vz/private/2201/root.hdd/root.hdd.{7a09b730-f2d6-4b21-b856-0bd6ca420a6e}
 (rw)
 Mounting /dev/ploop27939p1 at /vz/root/2201 fstype=ext4
 data='balloon_ino=12,'
 Container is mounted
 Adding IP address(es): (redacted)
 Setting CPU limit: 400
 Setting CPU units: 50
 Setting CPUs: 4
 Container start in progress...

 So apparently the ploop device should be /dev/ploop27939. Everything
 seems to work; inside the container this device is referred to by
 /proc/mounts:

 [root@server22 ~]# vzctl exec 2201 cat /proc/mounts
 /dev/ploop27939p1 / ext4
 rw,relatime,barrier=1,data=ordered,balloon_ino=12 0 0
 proc /proc proc rw,relatime 0 0
 sysfs /sys sysfs rw,relatime 0 0
 none /dev tmpfs rw,relatime,mode=755 0 0
 none /dev/pts devpts rw,relatime,mode=600,ptmxmode=000 0 0
 none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0

 But the device is actually missing:

 [root@server22 ~]# vzctl exec 2201 ls -l /dev/ploop27939p1
 ls: /dev/ploop27939p1: No such file or directory

 In fact, there doesn't seem to be ANY /dev/ploop devices in this container

 [root@server22 ~]# vzctl exec 2201 ls -l /dev/ploop*
 ls: /dev/ploop18940: No such file or directory
 ls: /dev/ploop18940p1: No such file or directory
 ls: /dev/ploop26517: No such file or directory
 ls: /dev/ploop26517p1: No such file or directory
 ls: /dev/ploop27379: No such file or directory
 ls: /dev/ploop27379p1: No such file or directory
 ls: /dev/ploop27939: No such file or directory
 ls: /dev/ploop27939p1: No such file or directory
 ls: /dev/ploop50951: No such file or directory
 ls: /dev/ploop50951p1: No such file or directory
 ls: /dev/ploop52860: No such file or directory
 ls: /dev/ploop52860p1: No such file or directory
 ls: /dev/ploop58415: No such file or directory
 ls: /dev/ploop58415p1: No such file or directory

 Why does it show devices when there are none present?  Obviously
 something is messed up; how can we fix this?




 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 Rene C. <ope...@dokbua.com>
 Tuesday, January 20, 2015 12:04 PM

 No takers

Re: [Users] simfs with ploop?

2015-01-21 Thread Devon B.
A couple have the ploop device and a couple don't*

 Devon B. <devo...@virtualcomplete.com>
 Wednesday, January 21, 2015 11:52 AM
 I looked at a few more containers.  A couple have the ploop device and
 a couple don't.  I'm not sure why.  Most that do have the ploop devices
 have a bunch of stale ones.  For instance:

 # vzctl exec  ls -l /dev/ploop*
 b-x--- 1 root root 182, 300625 Jan 18 22:57 /dev/ploop18789p1
 b-x--- 1 root root 182, 355073 Jan 18 22:57 /dev/ploop22192p1
 b-x--- 1 root root 182, 371265 Jan 18 22:57 /dev/ploop23204p1
 b-x--- 1 root root 182, 428529 Jan 18 22:57 /dev/ploop26783p1
 b-x--- 1 root root 182, 525073 Jan 18 22:57 /dev/ploop32817p1
 b-x--- 1 root root 182, 655857 Jan 18 22:57 /dev/ploop40991p1
 b-x--- 1 root root 182, 727537 Jan 18 22:57 /dev/ploop45471p1
 brw-rw---T 1 root disk 182, 749697 Jan 18 22:57 /dev/ploop46856p1
 b-x--- 1 root root 182, 773185 Jan 18 22:57 /dev/ploop48324p1
 b-x--- 1 root root 182, 864529 Jan 18 22:57 /dev/ploop54033p1
 b-x--- 1 root root 182, 897201 Jan 18 22:57 /dev/ploop56075p1

 The active one is pretty obvious (/dev/ploop46856p1).  It doesn't
 seem to be specific to a node or kernel; affected and unaffected
 containers sit side by side on the same system.

 Have you filed a bug report?

 Rene C. <ope...@dokbua.com>
 Wednesday, January 21, 2015 11:27 AM
 The reason I became aware of the problem was that a cPanel server
 started sending this message every morning:

 repquota: Can't stat() mounted device /dev/ploop50951p1: No such file
 or directory

 All containers on another hardware node, and several on this one, have
 the devices working correctly within the containers.


 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 Devon B. <devo...@virtualcomplete.com>
 Wednesday, January 21, 2015 11:11 AM
 I can't speak as to how to address the issue, but why do you consider
 it messed up?  I logged in to a few nodes and saw the same thing on
 all of them and I don't remember this being any different in the
 past.  The ploop device only exists outside of the container (when
 mounted).  Inside the container is just a reference, no actual device
 exists.

 I don't know enough about the original issue, what are you trying to
 accomplish with the ploop device inside the container?


 Rene C. <ope...@dokbua.com>
 Wednesday, January 21, 2015 6:47 AM
 I've gone through all containers and actually some of them work
 correctly, only some are messed up like this.

 Take for example this one:

 [root@server22 ~]# vzctl restart 2201
 Restarting container
 Stopping container ...
 Container was stopped
 Unmounting file system at /vz/root/2201
 Unmounting device /dev/ploop27939
 Container is unmounted
 Starting container...
 Opening delta /vz/private/2201/root.hdd/root.hdd
 Adding delta dev=/dev/ploop27939
 img=/vz/private/2201/root.hdd/root.hdd (ro)
 Adding delta dev=/dev/ploop27939
 img=/vz/private/2201/root.hdd/root.hdd.{7a09b730-f2d6-4b21-b856-0bd6ca420a6e}
 (rw)
 Mounting /dev/ploop27939p1 at /vz/root/2201 fstype=ext4
 data='balloon_ino=12,'
 Container is mounted
 Adding IP address(es): (redacted)
 Setting CPU limit: 400
 Setting CPU units: 50
 Setting CPUs: 4
 Container start in progress...

 So apparently the ploop device should be /dev/ploop27939. Everything
 seems to work; inside the container this device is referred to by
 /proc/mounts:

 [root@server22 ~]# vzctl exec 2201 cat /proc/mounts
 /dev/ploop27939p1 / ext4
 rw,relatime,barrier=1,data=ordered,balloon_ino=12 0 0
 proc /proc proc rw,relatime 0 0
 sysfs /sys sysfs rw,relatime 0 0
 none /dev tmpfs rw,relatime,mode=755 0 0
 none /dev/pts devpts rw,relatime,mode=600,ptmxmode=000 0 0
 none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0

 But the device is actually missing:

 [root@server22 ~]# vzctl exec 2201 ls -l /dev/ploop27939p1
 ls: /dev/ploop27939p1: No such file or directory

 In fact, there doesn't seem to be ANY /dev/ploop devices in this container

 [root@server22 ~]# vzctl exec 2201 ls -l /dev/ploop*
 ls: /dev/ploop18940: No such file or directory
 ls: /dev/ploop18940p1: No such file or directory
 ls: /dev/ploop26517: No such file or directory
 ls: /dev/ploop26517p1: No such file or directory
 ls: /dev/ploop27379: No such file or directory
 ls: /dev/ploop27379p1: No such file or directory
 ls: /dev/ploop27939: No such file or directory
 ls: /dev/ploop27939p1: No such file or directory
 ls: /dev/ploop50951: No such file or directory
 ls: /dev/ploop50951p1: No such file or directory
 ls: /dev/ploop52860: No such file or directory
 ls: /dev/ploop52860p1: No such file or directory
 ls: /dev/ploop58415: No such file or directory
 ls: /dev/ploop58415p1: No such file or directory

 Why does it show devices when there are none present?  Obviously
 something is messed up; how can we fix this?




 ___
 Users mailing list
 Users@openvz.org
 https

Re: [Users] OpenVZ and ZFS excellent experience

2015-01-09 Thread Devon B.
It is also important to note that, as things stand, there is wasted space
with ZFS if you use Advanced Format drives (usually 2TB or larger).  When
using ashift=12 (4K sector size) to create a ZFS RAID, you'll lose about
10-20% of your disk capacity depending on the RAID type; I don't remember
whether this affects stripes or not.  It was most noticeable in my testing
on a RAIDZ2.  When using ashift=9 (512-byte sector size), you'll have the
full capacity, but performance will suffer on Advanced Format drives.
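
For anyone wanting to reproduce this, here is a rough sketch of how the two
pool layouts would be created; the pool and device names are placeholders,
and ashift has to be chosen at pool creation time since it can't be changed
later:

# Advanced Format (4K-sector) drives: force 4K alignment at creation time
zpool create -o ashift=12 tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# 512-byte sectors (full capacity, but slower on 4K drives): ashift=9
# zpool create -o ashift=9 tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# confirm which ashift the pool actually got
zdb -C tank | grep ashift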


I've also opened a performance bug about write performance: the amount of
data written to the devices far exceeds the initial write size, which seems
to be most noticeable when your writes don't align perfectly with the ZFS
block size.  I had an email exchange with you about this.


However, ZFS has a lot of useful features that may or may not be worth the
capacity loss and write performance limitations, depending on your use case.
I have hopes that the performance issues will be worked out in upcoming
releases.



Pavel Odintsov <pavel.odint...@gmail.com>
Friday, January 9, 2015 3:39 PM
Hello, everybody!

Does anybody have any news about their ZFS and OpenVZ experience?

Why not?

Have you checked my comparison table for simfs vs ploop vs ZFS volumes?
You should do it ASAP:
https://github.com/pavel-odintsov/OpenVZ_ZFS/blob/master/openvz_storage_backends.md

Still not interesting?

For example, if you have a 5TB disk array (used up to 90%) and are using
ploop, you lose about 800GB of disk space!

This data is from a real hardware node with a few hundred containers.

I have excellent experience and very good news about ZFS! The ZFS on Linux
team will add a very important feature: Linux quota inside containers
(more details here: https://github.com/zfsonlinux/zfs/pull/2577).

But there is still no news about ZFS from the OpenVZ team (or even from
Virtuozzo Core), and we can work separately :)

Fortunately, we do not need any special support from vzctl and can use
plain vzctl with some lightweight manuals from my repo:
https://github.com/pavel-odintsov/OpenVZ_ZFS/blob/master/OpenVZ_containers_on_zfs_filesystem.md

I collected all useful information here
https://github.com/pavel-odintsov/OpenVZ_ZFS

Stay tuned! Join us!

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Live Migration Optimal execution

2014-11-22 Thread Devon B.
Suspend/dump shouldn't have anything to do with the disk size of the
container, AFAIK; that should only be dumping the memory of the system.
Have you tested multiple times?  Maybe a process hung during the
suspend?  It might also be useful for you to track the size of the
dump file.
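
If it helps, a rough sketch of checkpointing by hand so you can see the dump
size outside of vzmigrate; the CT ID and dump path below are just
placeholders:

# suspend the container and write its memory dump to an explicit file
vzctl chkpnt 101 --dumpfile /vz/dump/Dump.101
# the dump size is roughly what live migration has to transfer at suspend time
ls -lh /vz/dump/Dump.101
# resume the container from that same dump
vzctl restore 101 --dumpfile /vz/dump/Dump.101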



Nipun Arora <nipunarora2...@gmail.com>
Saturday, November 22, 2014 12:09 PM
Hi All,

I was wondering if anyone can suggest what is the most optimal way to 
do the following


1. Can anyone clarify if ploop is the best layout for minimum suspend 
time during live migration?


2. I tried migrating a ploop device where I increased the --diskspace 
to 5G, and found that the suspend time taken by live migration 
increased to 57 seconds (mainly undump and restore increased)... 
whereas a 2G diskspace was taking 2-3 seconds suspend time... Is this 
expected?


3. I tried running a write-intensive workload, and found that beyond
100-150Kbps the suspend time during live migration rapidly increased.
Is this an expected trend?


I am using vzctl 4.7 and ploop 1.11 on CentOS 6.5.

Thanks
Nipun
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Basic questions about ploop snapshotting

2014-11-18 Thread Devon B.
I believe a single file does exist for the snapshot, but it isn't what
you think.  If ploop snapshotting is what I believe it to be, that file
is only used to log changes to the filesystem while preserving the
root.hdd file (your original filesystem).  Once a snapshot is created,
the original root.hdd goes untouched; any changes to the filesystem go
into the file designated for the snapshot (root.hdd.{uuid}).  Then, when
a new snapshot file is created, the previous snapshot file is preserved
and the new snapshot file receives all further changes.  The top delta
file keeps track of the snapshots.
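
For reference, here's roughly what that looks like with the vzctl snapshot
commands (the CT ID is a placeholder):

# freeze the current top delta and start a new, empty writable delta
vzctl snapshot 101
# list the recorded snapshots and their UUIDs
vzctl snapshot-list 101
# roll the container back to the state it had at snapshot time
vzctl snapshot-switch 101 --id <uuid>
# drop a snapshot; vzctl merges/deletes the affected deltas as needed
vzctl snapshot-delete 101 --id <uuid>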



Simon Barrett <sgbarr...@gmail.com>
Tuesday, November 18, 2014 6:02 AM
 I'm not quite following all of your statements. Specifically, I don't
 see why you think you need a base AND a first snapshot. One snapshot is
 sufficient for later restoring. Also, I don't think you need to
 understand the underlying architecture in order to successfully use
 snapshots (it might be helpful or interesting, but not really
 necessary). The command 'snapshot-switch' reliably restores the state of
 the CT it had at the time when you created the snapshot. It's as simple
 as that. Also, when deleting snapshots, vzctl auto-magically does the
 right thing in that it merges or deletes when it is appropriate without
 affecting the other snapshots of the given container.


Sorry about kicking this off again, but this hit me over the
weekend and it cleared things up for me.  Please, someone, correct me
if I'm wrong.


As I understand it now, no single file corresponds to a snapshot, and
to think of it that way will lead you to do something silly with your
data.  A snapshot is an event.  If you want to think of it in terms of
files, it's the gap between the root.hdd and the delta.  It's like a
HUP in I/O: it gives you a spot to go back to.  When you delete a
snapshot, you're deleting the event, and that means the files on either
side of that discontinuity will be merged/healed.


When you mount a snapshot, for example when doing a file-based backup
(https://openvz.org/Ploop/Backup), you're not actually mounting a
snapshot, because there is no snapshot file to mount; you're mounting
the pre-snapshot data.
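
In practice, the backup recipe on that wiki page boils down to something
like the following sketch (the CT ID, mount point and rsync destination are
placeholders):

UUID=$(uuidgen)
# snapshot without suspending the container
vzctl snapshot 101 --id $UUID --skip-suspend --skip-config
# mount the frozen, pre-snapshot state somewhere convenient
vzctl snapshot-mount 101 --id $UUID --target /mnt/ct101-backup
# copy the files off with whatever tool you like
rsync -a /mnt/ct101-backup/ backup-host:/backups/ct101/
# clean up: unmount and delete the snapshot so the deltas get merged back
vzctl snapshot-umount 101 --id $UUID
vzctl snapshot-delete 101 --id $UUID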



Simon
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Shortest guide about running OpenVZ containers on top of ZFS

2014-11-12 Thread Devon B.




Scott Dowdle <dow...@montanalinux.org>
Wednesday, November 12, 2014 2:48 PM
Greetings,

- Original Message -

Performance issues aren't the only problem ploop solves... it also 
solves the changing inode issue. When a container is migrated from one 
host to another with simfs, inodes will change... and some services 
don't like that. Also because the size of a ploop disk image is fixed 
(although changeable), the fixed size acts as a quota... so you get 
your quota back if you turned it off.


For me, unless something changes, ZFS is a non-starter because almost
no one ships it, due to licensing issues.



---
I'm assuming ZFS wouldn't have the inode issue if you used the ZFS
functions for your migration.  If you decide to use ZFS, you should
probably use snapshotting, send, and receive during the migration rather
than vzmigrate.  ZFS isn't going to be a drop-in replacement if you want
to get the most out of it.  The problem I have with it is performance:
during testing I have had a lot of random performance issues and write
overhead.  ZoL (ZFS on Linux) just doesn't seem stable enough yet.
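
A hedged sketch of what I mean, assuming one ZFS dataset per container
private area; the dataset, host and CT names are placeholders:

# first pass while the container keeps running
zfs snapshot tank/ct101@migrate1
zfs send tank/ct101@migrate1 | ssh newhost zfs receive tank/ct101
# stop the CT and send only the blocks changed since the first pass
vzctl stop 101
zfs snapshot tank/ct101@migrate2
zfs send -i tank/ct101@migrate1 tank/ct101@migrate2 | ssh newhost zfs receive -F tank/ct101
# move the config over and start the CT on the destination
scp /etc/vz/conf/101.conf newhost:/etc/vz/conf/
ssh newhost vzctl start 101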


Sincerely,
Devon
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Shortest guide about running OpenVZ containers on top of ZFS

2014-11-12 Thread Devon B.





Nick Knutov <m...@knutov.com>
Wednesday, November 12, 2014 3:40 PM


ZFS in this case is more of an alternative to Parallels Cloud Storage, which
is closed source and hard to get even for money (I contacted Parallels
sales several times and never got a price list from them).

Also, ZFS is good in the case of a NAS with a large number of SSDs, or
ordinary disks with an L2ARC cache on SSD. And you can use ploop over ZFS
in this case. I suppose ploop over glusterfs (for example), and most other
filesystems with any redundancy (I mean any realization of the RAID idea),
will be more pain than a usable solution, by comparison.


I don't think you can just run ploop over ZFS.   Ploop requires ext4 as 
the host filesystem according to bug 2277: 
https://bugzilla.openvz.org/show_bug.cgi?id=2277
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] convert to ploop

2014-10-24 Thread Devon B.
I think what Kir was getting at was to set diskinodes equal to 65536 x
the size in GiB before converting.  So for 40 GiB, set diskinodes to 2621440.
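
In other words, something along these lines (the CT ID is a placeholder;
65536 is one inode per 16K, i.e. 65536 inodes per GiB):

# 40 GiB x 65536 inodes per GiB = 2621440
vzctl set 101 --diskspace 40G --diskinodes 2621440 --save
# now the simfs-to-ploop conversion should create a reasonably sized filesystem
vzctl convert 101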




On 10/24/2014 8:05 PM, Nick Knutov wrote:

Thanks, now I understand why this occurred, but what is the easiest way
to convert a lot of different CTs to ploop? As I remember there is no
way to set up unlimited diskinodes or disable them (in case I want to
use CT size when converting to ploop and don't want to think about
inodes at all).


On 25.10.2014 5:31, Kir Kolyshkin wrote:

[...]
Previously, we didn't support setting diskinodes for ploop, but later we
found a way to implement it (NOTE: for vzctl create and vzctl convert only).
The trick we use is that we create a file system big enough to accommodate
the requested number of inodes, and then use ploop resize (in this case a
downsize) to bring it down to the requested amount.

In this case, a 1G-inode requirement leads to the creation of a 16TB
filesystem (remember, 1 inode per 16K). Unfortunately, such a huge FS can't
be downsized to as low as 40G; the minimum seems to be around 240G (the
values printed in the error message are in sectors, which are 512 bytes
each).

Solution: please be reasonable when requesting diskinodes for ploop.




___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] VETH and VENET

2014-10-01 Thread Devon B.
Read more at http://openvz.org/Veth. OpenVZ supports both veth and venet
out of the box, but some special routing configuration may be required
depending on how you use veth. I think the most common method is routing
through an Ethernet bridge.
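
Roughly, the bridge method from that page looks like the sketch below; the
CT ID, bridge name and NIC name are assumptions, and the host-side veth name
(veth101.0 here) is just the usual default:

# give the container a veth pair; the host end typically shows up as veth101.0
vzctl set 101 --netif_add eth0 --save
# bridge the host NIC and the container's veth together
brctl addbr vmbr0
brctl addif vmbr0 eth0
brctl addif vmbr0 veth101.0
# (if eth0 carries the host's IP, move that IP onto vmbr0 -- see the wiki page)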


On 10/1/2014 6:12 PM, Matt wrote:

http://openvz.org/Quick_installation

Does that method support veth just as is?


On Wed, Oct 1, 2014 at 5:00 PM, Devon B. devo...@virtualcomplete.com wrote:

For installing?  Right on the home page, the Installation section shows
directions for RHEL6/CentOS 6:
http://openvz.org/Quick_installation




On 10/1/2014 5:17 PM, Matt wrote:

I need to install OpenVZ on CentOS 6, and I need to support both venet
and veth containers.

http://openvz.org/Quick_Installation_CentOS_6

This install method does that, but it says it's not supported/unofficial.
Are there supported install directions for this?

Thanks.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Ploop Random I/O

2014-09-30 Thread Devon B.
You should read more of the resources available at openvz.org: 
http://openvz.org/Ploop/Getting_started#Resizing_a_ploop_image


There are no specific inode settings for ploop; it creates a private
ext4 filesystem for each container, so the inode limit depends only on
that filesystem (ext4).
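
For the disk size part, growing a ploop container is a single command and
works while the CT is running; the CT ID and size below are placeholders:

# grow the container's ploop image and the ext4 filesystem inside it
vzctl set 101 --diskspace 200G --save
# check the result (and the inode count) from inside the container
vzctl exec 101 df -h /
vzctl exec 101 df -i /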


On 9/30/2014 10:49 AM, Matt wrote:

In ploop, can inodes and disk size easily be increased for a container?

On Wed, Sep 24, 2014 at 7:34 PM, Kir Kolyshkin k...@openvz.org wrote:

On 09/19/2014 11:45 AM, Matt wrote:

I have a container currently using about 150GB of space.  It is very
random I/O hungry.  Has many small files.  Will converting it to ploop
hurt I/O performance?

In case of many small files it might actually improve the performance.

ploop performance is very close to a regular FS, except for when the image
is growing -- that operation slows it down somewhat, as it needs to allocate
extra blocks and modify the block address table. I guess it's not an issue
in your case.

But don't take my word for it, give it a try yourself!

Kir.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Centos 7

2014-09-11 Thread Devon B.
It will probably be a few months before they have anything available for
the RHEL7 kernel (3.10), and even longer for it to become stable.  Ploop
also doesn't support XFS, so time will tell whether that will change or
you'll have to continue with ext4.


On 9/11/2014 11:32 AM, Matt wrote:

https://openvz.org/Quick_installation

Any install guides for Centos 7 yet?
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] vzmigrate to /vz2 instead of /vz

2014-09-11 Thread Devon B.

On 9/11/2014 7:00 PM, Nick Knutov wrote:

I have old server with usual disks and new server with two ssd which are
smaller size. I have /vz on one disk and /vz2 on another.

I want to live migrate CTs from the old server to specified partition on
the new server but I can't find how to do it. Does anybody know?

You could get dirty and do it manually with ploop send and 
checkpointing.  However, have you tried just using a symlink from 
/vz/private/VEID to /vz2/private/VEID?

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] vzmigrate to /vz2 instead of /vz

2014-09-11 Thread Devon B.
Like I said, though: use a symlink per VE, not for the entire /vz2-to-/vz
directory.  Then you won't have to change anything in the config.  Just
create a symlink for the virtual servers you want on the second SSD
prior to migrating.


mkdir /vz2/private/VEID
ln -s /vz2/private/VEID /vz/private/VEID

Then try the migration, does it work?

On 9/11/2014 8:51 PM, Nick Knutov wrote:

I'm not good enough with such OpenVZ internals and was hoping there is a
ready-made solution. I found https://openvz.org/Vzmigrate_filesystem_aware
but it is for an older version of vzmigrate.

Yes, I tried symlinks:

1) /vz2 - /vz2 as a symlink on /vz, and back:
I had to change the private/root paths in the CT conf after
vzmigrate + vzmigrate back, and the files were not removed after the second
vzmigrate (from the node where the symlink was).

2) /vz - /vz2:
looks OK, but I have to change the paths in the CT config afterwards, so the
CT has to be restarted, with downtime.

So, all this does not look good. Maybe it can be better with mount
--bind, but that is also not a good way.



On 12.09.2014 5:33, Devon B. wrote:

On 9/11/2014 7:00 PM, Nick Knutov wrote:

I have old server with usual disks and new server with two ssd which are
smaller size. I have /vz on one disk and /vz2 on another.

I want to live migrate CTs from the old server to specified partition on
the new server but I can't find how to do it. Does anybody know?


You could get dirty and do it manually with ploop send and
checkpointing.  However, have you tried just using a symlink from
/vz/private/VEID to /vz2/private/VEID?




___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] vzmigrate to /vz2 instead of /vz

2014-09-11 Thread Devon B.
There shouldn't be millions of small files with ploop; there should just be
the one: root.hdd.  Where it is mounted shouldn't matter (VE_ROOT).


On 9/11/2014 9:57 PM, Nick Knutov wrote:

I did exactly that.

Migration to a symlink is working, and the CT runs OK afterwards.  But the
private/root paths are rewritten to /vz after migration, and for simfs with
billions of small files, running a CT from a symlink can be slower.

Migration from a symlink is also working, with the same issues, plus the
source folder with the CT is not deleted after migration -- only the symlink.


On 12.09.2014 7:16, Devon B. wrote:

Like I said though, use a symlink per VE, not the entire vz2/vz
directory.  Then you won't have to change anything in the config.  Just
create a symlink for the virtual servers you want on the second SSD
prior to migrating.

mkdir /vz2/private/VEID
ln -s /vz2/private/VEID /vz/private/VEID

Then try the migration, does it work?

On 9/11/2014 8:51 PM, Nick Knutov wrote:

I'm not good enough with such OpenVZ internals and was hoping there is a
ready-made solution. I found https://openvz.org/Vzmigrate_filesystem_aware
but it is for an older version of vzmigrate.

Yes, I tried symlinks:

1) /vz2 - /vz2 as a symlink on /vz, and back:
I had to change the private/root paths in the CT conf after
vzmigrate + vzmigrate back, and the files were not removed after the second
vzmigrate (from the node where the symlink was).

2) /vz - /vz2:
looks OK, but I have to change the paths in the CT config afterwards, so the
CT has to be restarted, with downtime.

So, all this does not look good. Maybe it can be better with mount
--bind, but that is also not a good way.



On 12.09.2014 5:33, Devon B. wrote:

On 9/11/2014 7:00 PM, Nick Knutov wrote:

I have old server with usual disks and new server with two ssd which
are
smaller size. I have /vz on one disk and /vz2 on another.

I want to live migrate CTs from the old server to specified
partition on
the new server but I can't find how to do it. Does anybody know?


You could get dirty and do it manually with ploop send and
checkpointing.  However, have you tried just using a symlink from
/vz/private/VEID to /vz2/private/VEID?

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users