Re: [Users] live migration inside one physical node

2015-09-08 Thread Kevin Holly [Fusl]

Hi!

You can just run "vzctl suspend CTID", move the container to another location, and
then restore it there with "vzctl restore CTID" after adjusting the
configuration file.
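
A minimal sketch of that, assuming the private area simply moves to another mounted disk (CTID 101 and the paths are placeholders, untested):

CTID=101
vzctl suspend $CTID
# move the private area onto the other disk
mv /vz/private/$CTID /vz2/private/$CTID
# point VE_PRIVATE (and VE_ROOT, if needed) at the new location
sed -i 's|/vz/private|/vz2/private|' /etc/vz/conf/$CTID.conf
vzctl restore $CTID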

On 09/08/2015 07:14 AM, Nick Knutov wrote:
> 
> Is it possible to do live migration between physical disks inside one
> physical node?
> 
> I suppose the answer is still no, so the question is what is possible to
> do for this?
> 

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] live migration inside one physical node

2015-09-08 Thread Kevin Holly [Fusl]

Take a look at how vzmigrate works. It's a bash script, after all, and it isn't
that hard to reproduce what it actually does.

If your container uses the simfs layout, you can rsync the files into the new
directory twice (the second pass only transfers what changed during the first pass).
Once that is done, just suspend the container, run a final rsync and restore the
container. That way the container is only offline for the duration of the last
rsync, which shouldn't take very long.
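
A rough sketch of that rsync approach (CTID and paths are placeholders; it assumes the private area is simply moved to another disk on the same node, untested):

CTID=101
SRC=/vz/private/$CTID
DST=/vz2/private/$CTID

# two warm passes while the container keeps running
rsync -aH --delete "$SRC/" "$DST/"
rsync -aH --delete "$SRC/" "$DST/"

# short offline window: suspend, final sync, switch the config, restore
vzctl suspend $CTID
rsync -aH --delete "$SRC/" "$DST/"
sed -i "s|$SRC|$DST|" /etc/vz/conf/$CTID.conf
vzctl restore $CTID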

If your container uses the ploop layout, you can use ploop send and ploop
receive (I never looked into the details of how it works, but vzmigrate will show
you), suspend the container, and then let ploop finalize the process. You need to
resume the container once this is done, too.

For simfs, please take a look at my bash script which does simfs -> ploop
live migration, just to get an idea of how you can do it with rsync:
https://github.com/Fusl/simfs2ploop


On 09/08/2015 03:12 PM, Nick Knutov wrote:
> 
> This way takes time. It's definitely not _live_ migration (
> 
> 
> On 08.09.2015 14:00, Kevin Holly [Fusl] wrote:
>> Hi!
> 
>> You can just do "vzctl suspend CTID", move the container to another place 
>> and then restore it there with "vzctl restore CTID" after you changed the 
>> configuration file.
> 
>> On 09/08/2015 07:14 AM, Nick Knutov wrote:
> 
>> > Is it possible to do live migration between physical disks inside one
>> > physical node?
> 
>> > I suppose the answer is still no, so the question is what is possible to
>> > do for this?
> 
> 
>> ___
>> Users mailing list
>> Users@openvz.org
>> https://lists.openvz.org/mailman/listinfo/users
> 
> 
> 
> 
> _______
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
> 

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] measuring IO inside an openvz container?

2015-09-04 Thread Kevin Holly [Fusl]

Hi,

AFAIK there is no tool that displays the IO options configured on the
hardware node.

You can check whether ioping (https://github.com/koct9i/ioping) or iotop
(http://guichaz.free.fr/iotop/) gives you some more debugging information about this.
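
For a periodic check, a simple sketch you could run from cron (path, count and log file are arbitrary; ioping's -c option sets the number of requests):

# issue 10 I/O requests against the current filesystem and append the result to a log
ioping -c 10 /var/tmp >> /var/log/ioping.log 2>&1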

On 09/04/2015 02:28 PM, Benjamin Henrion wrote:
> Hi,
> 
> I do have an account on an openvz server, but the IO provided is very
> variable, sometimes my shell is fluid, sometimes it is really slow.
> 
> Is there a test I could run periodically to test the disk IO?
> 
> Best,
> 

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] Ploop Help

2015-08-06 Thread Kevin Holly [Fusl]

Hi,

You can list which layout your containers use with vzlist.

For example, vzlist -Ho ctid,layout lists all containers with their CTID and
layout (possible values are simfs and ploop).
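
A minimal example on the hardware node:

# CTID and layout of every container
vzlist -Ho ctid,layout
# only the simfs ones
vzlist -Ho ctid,layout | awk '$2 == "simfs"'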

On 08/06/2015 09:47 PM, Matt wrote:
 How do I tell if a container is ploop or simfs?  How do I convert from
 ploop to simfs?
 
 I need to move a few containers from a centos server to a Proxmox
 server that does not support ploop is the reason.
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] SIMFS users

2015-07-21 Thread Kevin Holly [Fusl]

Here!

_ hostcmd vz 'vzlist -Holayout | fgrep simfs' | wc -l
911

On 21.07.2015 09:21, Sergey Bronnikov wrote:
 Hello,
 
 we want find people who still use simfs for OpenVZ containers.
 Do we have such users?
 
 Sergey B.
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] Beancounter/Accounting file question

2015-07-08 Thread Kevin Holly [Fusl]

Hi,

The /proc/bc accounting information resets when you reboot the container.
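
If you want totals that survive container reboots, a rough sketch is to sample the counters periodically on the hardware node and diff them yourself. This assumes the kernel exposes per-beancounter I/O accounting as /proc/bc/<CTID>/ioacct with "read"/"write" lines; the exact path and field names may differ on your kernel:

# append a timestamped read/write snapshot for every container to a log
for f in /proc/bc/*/ioacct; do
    ctid=$(basename "$(dirname "$f")")
    awk -v t="$(date +%s)" -v ct="$ctid" '$1 == "read" || $1 == "write" {print t, ct, $1, $2}' "$f"
done >> /var/log/bc-ioacct.log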

On 2015-07-08 19:11, Mark Johanson wrote:
 I was curious when the beancounter/accounting ( /proc/bc ) information is 
 reset? Ie daily, weekly, monthly, only after reboot, since vm creation, etc?
 
 Have one user who appears to have about 532G in writes, which I am pretty 
 sure is not a daily, or a weekly. Perhaps monthly?
 
 So I am trying to get some tracking on some of our heavy i/o writing users 
 (as well as some other items) and can't find information on how often that 
 information is reset.
 
 Thanks for any assistance.
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] Startup problem

2015-07-07 Thread Kevin Holly [Fusl]

Hi,

There is a bug report for this at bugzilla.redhat.com:

https://bugzilla.redhat.com/show_bug.cgi?id=1177819

If you read through it, you will find at least one workaround for this bug.


On 2015-07-07 11:08, Francois Martin wrote:
 [...]

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] OpenVZ and IPv6

2015-06-18 Thread Kevin Holly [Fusl]

Hi,

On 06/18/2015 11:13 PM, Todd Mueller wrote:
 [...]

I'm not entirely sure you understood what he is asking for.

When you do vzctl set CTID --ipadd 2001:1234:1234:1234::/64 --save, OpenVZ
creates a route only for the network address (2001:1234:1234:1234::), but not
for the entire /64 subnet, which is already unexpected behaviour.

Inside the container, OpenVZ automatically adds the network address
(2001:1234:1234:1234::) to venet0.

2001:1234:1234:1234:: is now reachable from the outside.

If you add 2001:1234:1234:1234::1 as an address inside the container, it should
send an NDP packet to inform neighbours (including the router) about the new
IPv6 address, but what we see instead is that 2001:1234:1234:1234::1 stays
unreachable.

In this case OpenVZ treats the 2001:1234:1234:1234::/64 subnet as a single
address (2001:1234:1234:1234::) instead of the subnet we requested, and
throws away all subnet information.


How we expect it to work instead:

vzctl set CTID --ipadd 2001:1234:1234:1234::/64 --save should add an entire
/64 route and allow the container to use the whole /64 subnet as its outgoing
IPv6 address range.

vzctl should either not add any IPv6 address from this subnet by default, or it
should add 2001:1234:1234:1234:: to the container by default.

Inside the container we should then be able to add 2001:1234:1234:1234::1 as
another IPv6 address on venet0, and the OpenVZ kernel / hardware node should
forward NDP packets so that neighbours, including routers, learn about the new
IPv6 address.

The addresses 2001:1234:1234:1234:: and 2001:1234:1234:1234::1 should then both
be reachable from the outside.
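
For reference, a rough sketch of the commands involved, using the example prefix from above (CTID 101 is a placeholder; the manual route on the hardware node is only an assumption about how one might approximate the expected behaviour today, untested):

# on the hardware node: current behaviour only routes the network address
vzctl set 101 --ipadd 2001:1234:1234:1234::/64 --save
# assumed manual workaround: route the whole /64 towards the container
ip -6 route add 2001:1234:1234:1234::/64 dev venet0

# inside the container: add a further address from the subnet
ip -6 addr add 2001:1234:1234:1234::1/64 dev venet0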


I hope I have explained this in enough detail that everyone reading gets an
idea of what we mean.

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] New debian-8.0 template and old kernels

2015-06-17 Thread Kevin Holly [Fusl]

A feature necessary to run Debian 8 inside a container was added in
042stab094.7:

https://openvz.org/Download/kernel/rhel6/042stab094.7
 ms/hrtimer: Backport CLOCK_BOOTTIME feature, needed for latest systemd (#2937)

You will need to upgrade the kernel in order to get Debian 8 working properly.
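
A hedged sketch of checking and upgrading on a RHEL6-based node (the kernel package name vzkernel is an assumption; a reboot into the new kernel is required):

# current kernel; needs to be >= 2.6.32-042stab094.7 for Debian 8 guests
uname -r
# pull in the newer OpenVZ kernel and reboot into it
yum install vzkernel
reboot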

On 06/17/2015 10:48 PM, ?? ??? wrote:
 How I see ubuntu-15.04-x86_64 have the same issue on 2.6.32-042stab093.4 
 kernel.
 
 2015-06-17 23:33 GMT+03:00 ?? ??? mrqwe...@gmail.com 
 mailto:mrqwe...@gmail.com:
 
 New template debian-8.0-x86_64 have problems with old kernels?
 
 On 2.6.32-042stab109.12 with vzctl 4.9.2 - it work.
 On 2.6.32-042stab106.6 with vzctl 4.8 - it work.
 But on 2.6.32-042stab093.4 with vzctl 4.8 - init in container not work.
 I upgrade vzctl to 4.9.2, but it not help.
 After start we have only -
 root@debian-8-test:/# ps aux
 USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
 root 1  0.0  1.0  28076  2772 ?Ss   16:01   0:00 init -z  
 
 root 2  0.0  0.0  0 0 ?S16:01   0:00 
 [kthreadd/1225]
 root 3  0.0  0.0  0 0 ?S16:01   0:00 
 [khelper/1225]
 root55  0.0  0.2  27556   752 ?Ss   16:01   0:00 vzctl: 
 pts/0   
 root56  0.0  0.7  20188  2028 pts/0Ss   16:01   0:00 -bash
 root71  0.0  0.4  17436  1140 pts/0R+   16:02   0:00 ps aux
 
 And errors on stop, when it try run /run/initctl
 
 
 
 
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] directory /.cpt_hardlink_dir_a920e4ddc233afddc9fb53d26c392319

2015-05-12 Thread Kevin Holly [Fusl]
On 05/12/2015 11:29 PM, Kir Kolyshkin wrote:
 https://github.com/kolyshkin/vzctl/commit/09e974fa3ac9c4a1

The correct link is https://github.com/kolyshkin/vzctl/commit/09e974fa3ac9c4ab,

or the shortened link https://github.com/kolyshkin/vzctl/commit/09e9,

or the full link 
https://github.com/kolyshkin/vzctl/commit/09e974fa3ac9c4abd42194eec8441a40e63ea991


-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] IPv6

2015-02-03 Thread Kevin Holly [Fusl]

Hi,

Can you ping the host machine from within the container? What does
"ip -6 r l" say? Is there anything in /var/log/messages regarding IPv6?

On my host machines I had to do the following in order to get IPv6 working
properly:

for proxy_ndp in /proc/sys/net/ipv6/conf/eth*/proxy_ndp; do echo 1 > $proxy_ndp; done
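
To make that persistent across reboots, a small sketch using sysctl (key names assumed; adjust per interface if you prefer):

cat >> /etc/sysctl.conf <<'EOF'
net.ipv6.conf.all.proxy_ndp = 1
net.ipv6.conf.default.proxy_ndp = 1
EOF
sysctl -p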



On 02/03/2015 06:04 PM, Matt wrote:
 Recently IPv6 quit working on my Openvz CentOS 6 box.  No longer
 works on host or containers.  I imagine it was after a yum update.
 Whats weird is if I reboot the OpenVZ host machine IPv6 starts to
 work again on the host and the containers for about 5 minutes but
 then quits again.  Any ideas?
 
 I installed following these directions.
 
 https://openvz.org/Quick_installation
 
 With some changes found here to allow bridged containers and IPv6.
 
 https://openvz.org/Quick_Installation_CentOS_6 
 http://openvz.org/Virtual_Ethernet_device https://openvz.org/IPv6 
 ___ Users mailing list 
 Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
 

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] Understanding resource usages in containers

2014-11-28 Thread Kevin Holly [Fusl]

On 11/28/2014 03:47 PM, Nipun Arora wrote:
 Hi folks,
Hi Nipun!

 1. --cpu-limit puts a limit on the max percentage of cpu that can
 be used in the container.
Exactly. 100% is a core, 200% is two cores and so on.
 Does this also limit the number of processors the container has 
 access to?
No, it does not. For that you need --cpus.
For example: defining --cpus 2 --cpulimit 100 results in processes
being scheduled across 2 cores with either a maximum of 50% on each
core (if both cores are used at the same time), or 100% on one core
(if only one core is used at a time), or something in between.
So the resulting sum across both cores never exceeds 100%
(75%+25%=100%, or 10%+90%=100%, and so on...).

 2. --io-limit is a way to limit the I/O - How is this measured?
 Both read+write combined?
--iolimit limits both read and write, and the two are counted combined.
But(!) --iolimit never limits cached reads/writes, only direct access
or data that is being flushed to disk.
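
For reference, a minimal example of setting such limits (CTID 101 and the values are placeholders; --iolimit takes bytes per second, and whether the "M" suffix is accepted depends on your vzctl version):

# two CPUs, but at most one full core worth of CPU time in total
vzctl set 101 --cpus 2 --cpulimit 100 --save
# cap combined read+write I/O at roughly 10 MB/s
vzctl set 101 --iolimit 10M --save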

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


[Users] e3 editor not working inside containers with ploop filesystem

2014-10-29 Thread Kevin Holly [Fusl]

Hi,

one of our customers contacted us about a piece of software which runs
perfectly fine everywhere else, but not in their container.

The following is an strace from the editor within one of my containers
which runs on the ploop filesystem:

http://sprunge.us/Cjda
root@mail:~# strace e3 /etc/network/interfaces > /dev/null
execve("/usr/bin/e3", ["e3", "/etc/network/interfaces"], [/* 18 vars */]) = 0
ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(0, SNDCTL_TMR_START or TCSETS, {B38400 opost -isig -icanon -echo ...}) = 0
sigaction(SIGCONT, {0x8049bb3, [], 0}, NULL, 0x404cb0) = 0
write(1, "\33[?17;0;64c", 11)           = 11
open("/etc/network/interfaces", O_RDONLY) = 3
lseek(3, 0, SEEK_END)                   = 1198
lseek(3, 0, SEEK_SET)                   = 0
brk(0x81599d0)                          = 0x8b89000
fstat(3, 0x805ea74)                     = -1 EOVERFLOW (Value too large for defined data type)
write(1, "\33[001;001H", 10)            = 10
write(1, "\33[44m\33[33m\33[1m", 14)      = 14
write(1, "ERROR  75", 9)                = 9
write(1, "\33[40m\33[37m\33[0m", 14)      = 14
write(1, "\33[001;001H", 10)            = 10
write(1, "\33[001;011H", 10)            = 10
read(0, "\3", 1)                        = 1
write(1, "\33[001;001H", 10)            = 10
write(1, "\33[44m\33[33m\33[1m", 14)      = 14
write(1, "", 0)                         = 0
write(1, "\33[40m\33[37m\33[0m", 14)      = 14
write(1, "\33[001;011H", 10)            = 10
write(1, "\33[001;011H", 10)            = 10
write(1, "\n", 1)                       = 1
write(1, "\33[?2c", 5)                  = 5
unlink("e3##")                          = -1 ENOENT (No such file or directory)
ioctl(0, SNDCTL_TMR_START or TCSETS, {B38400 opost isig icanon echo ...}) = 0
_exit(0)                                = ?

[root@neko ~]# df /; df -i /
Filesystem  1K-blocks   Used  Available Use% Mounted on
/dev/sda3  5750714048 3175276900 2283317700  59% /
Filesystem    Inodes   IUsed     IFree IUse% Mounted on
/dev/sda3  365150208 2136604 363013604    1% /


The same editor works within one of my simfs containers:

http://sprunge.us/bWXV

root@nh1nl:~# df /; df -i /
Filesystem 1K-blocks   Used Available Use% Mounted on
/dev/simfs  52428800 607196  51821604   2% /
Filesystem   Inodes IUsed    IFree IUse% Mounted on
/dev/simfs 26214400 21301 26193099    1% /


Is this a bug in the ploop filesystem or the mentioned editor?


-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] Vzmigrate fails (unsupported filesystem nfsd)

2014-09-12 Thread Kevin Holly [Fusl]

Hi,

On 09/12/2014 03:23 PM, Corrado Fiore wrote:
 The error, actually, refers to the NFS file system (which is
 present on both nodes), hence my question whether it was a false
 positive (= bug in vzmigrate) or not.
Can you try to run vzmigrate with the --nodeps=cpu parameter?
It *might* crash your host node, but I never got that far (it usually
just exits with an error message saying that the two CPUs are incompatible and
starts the suspended container on the source node again).
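
A hedged example invocation (destination host and CTID are placeholders; the exact --nodeps value syntax may differ between vzmigrate versions):

vzmigrate --online --nodeps=cpu dest-node.example.com 101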

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] backup ploop

2014-09-11 Thread Kevin Holly [Fusl]

Hi,

On 09/12/2014 01:05 AM, Nick Knutov wrote:
 How to backup ploop CT's using and not using snapshots if
 
 1) I do not need the state of running processes, only files 2) But
 I have mysql on some CTs and some files can be broken if they are 
 not synced/closed 3) Backup must be without downtime, I can't stop
 CT  make backup  run CT
 

Maybe that's something for you (code is untested but *should* work)?

# define the container id
CTID=100

# copy any files in /vz/private/$CTID/ away but exclude the file /vz/private/$CTID/root.hdd/root.hdd
rsync ...

# remove all existing snapshots for this container (just to be sure that the previous backup process did not crash)
for snapid in $(vzctl snapshot-list $CTID -HoUUID); do vzctl snapshot-delete $CTID --id $snapid; done

# exec any necessary commands inside container BEFORE snapshot
vzctl exec $CTID 'run command to flush mysql onto the disk and disable disk write access (write to memory for now)'

# snapshot the container
vzctl snapshot $CTID --skip-suspend

# exec any necessary commands inside container AFTER snapshot
vzctl exec $CTID 'run command to flush mysql onto the disk and enable disk write access again'

# copy your /vz/private/$CTID/root.hdd/root.hdd file
rsync ...

# remove all existing snapshots for this container after we're done
for snapid in $(vzctl snapshot-list $CTID -HoUUID); do vzctl snapshot-delete $CTID --id $snapid; done

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] Vzmigrate fails with error iptables-save exited with 1

2014-09-01 Thread Kevin Holly [Fusl]

Hi,

sorry for TOFU...

Can you try to run iptables-save *inside* the container? This is IIRC
what vzmigrate wants to do.
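
That is, something like this (CTID 101 is a placeholder):

# run iptables-save in the container's context, which is roughly what vzmigrate does
vzctl exec 101 iptables-save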

On 09/01/2014 03:50 AM, Corrado Fiore wrote:
 Dear All,
 
 I'm trying to migrate online a VPS but I keep seeing the following
 error:
 
  Live migrating container... Copying
 top ploop delta with CT suspend Sending
 /vz/private/10004/root.hdd/root.hdd Setting up checkpoint... 
 suspend... get context... Checkpointing completed successfully 
 Dumping container Setting up checkpoint... join context.. dump... 
 Can not dump container: Invalid argument Error: iptables-save
 exited with 1 Checkpointing failed Error: Failed to dump container 
 Resuming... 
 
 Notes:
 
 1)  The container keeps working on the original node after the
 aborted attempt to migrate.
 
 2)  I have tried to manually issue `iptables-save` on the
 destination node and it works.
 
 3)  I know that the destination node must have loaded all the
 required IPtables modules.  These are the relevant lines:
 
 
   ORIGIN NODE  
 
 [root@node16 /]# uname -a Linux node16.x.com
 2.6.32-042stab093.4 #1 SMP Mon Aug 11 18:47:39 MSK 2014 x86_64
 x86_64 x86_64 GNU/Linux
 
 [root@node16 /]# yum list installed | grep ploop ploop.x86_64
 1.12-1 @openvz-utils ploop-lib.x86_64 1.12-1
 @openvz-utils
 
 [root@node16 /]# lsmod | grep ip nf_conntrack_ipv4   9946  2
 nf_nat nf_defrag_ipv4  1531  1 nf_conntrack_ipv4 
 nf_conntrack   80313  4
 vzrst,nf_nat,nf_conntrack_ipv4,vzcpt ip6t_REJECT 4711
 0 ip6table_mangle 3669  0 ip6table_filter 3033  0 
 iptable_mangle  3493  0 iptable_filter  2937  0 
 xt_multiport2716  0 ipt_REJECT  2399  0 
 ip_tables  18119  2 iptable_mangle,iptable_filter 
 ip6_tables 18988  2 ip6table_mangle,ip6table_filter 
 ipv6  322874  3 vzrst,ip6t_REJECT,ip6table_mangle
 
 
   DESTINATION NODE  
 
 [root@node17 /]# uname -a Linux node17.x.com
 2.6.32-042stab093.4 #1 SMP Mon Aug 11 18:47:39 MSK 2014 x86_64
 x86_64 x86_64 GNU/Linux
 
 [root@node17 /]# yum list installed | grep ploop ploop.x86_64
 1.12-1 @openvz-utils ploop-lib.x86_64 1.12-1
 @openvz-utils
 
 [root@node17 /]# lsmod | grep ip iptable_nat 6302  0 
 nf_nat 23213  2 iptable_nat,vzrst nf_conntrack_ipv4
 9946  3 iptable_nat,nf_nat nf_defrag_ipv4  1531  1
 nf_conntrack_ipv4 nf_conntrack   80313  5
 iptable_nat,vzrst,nf_nat,nf_conntrack_ipv4,vzcpt ip6t_REJECT
 4711  0 ip6table_mangle 3669  0 ip6table_filter
 3033  0 iptable_mangle  3493  0 iptable_filter
 2937  0 xt_multiport2716  0 ipt_REJECT
 2399  0 ip_tables  18119  3
 iptable_nat,iptable_mangle,iptable_filter ip6_tables
 18988  2 ip6table_mangle,ip6table_filter ipv6
 322874  3 vzrst,ip6t_REJECT,ip6table_mangle
 
 
 In fact, all the relevant modules are present on the destination
 node AFAICT.  What am I missing here?
 
 Thanks, Corrado Fiore 
 ___ Users mailing list 
 Users@openvz.org https://lists.openvz.org/mailman/listinfo/users
 

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] Ploop filesystem provisioned incorrectly (wrong size)

2014-07-26 Thread Kevin Holly

On 23/07/14 09:34, Kir Kolyshkin wrote:

I just talked with an L3 supporter at SolusVM and they told me that
this seems to be a well-known vzctl bug.

What SolusVM basically does is this on creation of the container:

https://ezcrypt.it/ag9n#QcRKRX8DIFdFKJv6ZNFahTSt

And this is the reinstallation process of the container:

https://ezcrypt.it/cg9n#sHQJeiV7Y6bl6zyBBjS60oLN

Where the config looks exactly like this:

https://ezcrypt.it/dg9n#g2Ico8AknpAtJbAcxkbBQDub


-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] Ploop filesystem provisioned incorrectly (wrong size)

2014-07-26 Thread Kevin Holly

On 26/07/14 21:45, Kevin Holly wrote:

Oh, and downgrading vzctl to version vzctl-4.6.1-1.x86_64 helped for
now...

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] Ploop filesystem provisioned incorrectly (wrong size)

2014-07-23 Thread Kevin Holly

Hi Kir,

On 23/07/14 00:25, Kir Kolyshkin wrote:
 I failed to reproduce your problem locally:
Okay, then I have to dig further and see where the problem is.
 Final question, just curious -- are you paying to SolusVM? If yes,
 why don't you use their support to help with your issues, relying
 on a free help from OpenVZ community and deveopers instead?
I thought that this might be an OpenVZ bug, or that I would get more useful
support here, as the SolusVM guys are not really helpful when such things
happen. But I'll try to contact and talk with them and let's see
where this goes.

Anyway, thanks for your help so far.


-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


[Users] Ploop filesystem provisioned incorrectly (wrong size)

2014-07-22 Thread Kevin Holly

Hi,

I have a customer complaining about his filesystem being too small
(product FS size: 250GB, his FS size: 219GiB / 235GB).

After further debugging this issue I am now here:


Creation of the container's filesystem (I can provide more logs; by
default I have LOG_LEVEL=7 and VERBOSE=1):

2014-06-16T19:20:02+0200 vzctl : CT 2671 : Creating image:
/vz/private/2671.tmp/root.hdd/root.hdd size=262144000K
2014-06-16T19:20:02+0200 : Creating delta
/vz/private/2671.tmp/root.hdd/root.hdd bs=2048 size=524288000 sectors v2
2014-06-16T19:20:02+0200 : Adding snapshot
{5fbaabe3-6958-40ff-92a7-860e329aab41}
2014-06-16T19:20:02+0200 : Storing
/vz/private/2671.tmp/root.hdd/DiskDescriptor.xml
2014-06-16T19:20:02+0200 : Opening delta
/vz/private/2671.tmp/root.hdd/root.hdd
2014-06-16T19:20:02+0200 : Adding delta dev=/dev/ploop16059
img=/vz/private/2671.tmp/root.hdd/root.hdd (rw)
2014-06-16T19:20:02+0200 : Running: parted -s /dev/ploop16059 mklabel
gpt mkpart primary 1048576b 268434407423b
2014-06-16T19:20:02+0200 : Running: mkfs -t ext4 -j -b4096
-Elazy_itable_init,resize=4294967295 -Jsize=128 -i16384 /dev/ploop16059p1
2014-06-16T19:20:03+0200 : Running: /sbin/tune2fs -ouser_xattr,acl -c0
-i0 /dev/ploop16059p1
2014-06-16T19:20:03+0200 : Creating balloon file
.balloon-c3a5ae3d-ce7f-43c4-a1ea-c61e2b4504e8
2014-06-16T19:20:03+0200 : Mounting /dev/ploop16059p1 at
/vz/private/2671.tmp/root.hdd/root.hdd.mnt fstype=ext4 data=''
2014-06-16T19:20:03+0200 : Unmounting device /dev/ploop16059
2014-06-16T19:20:03+0200 : Opening delta
/vz/private/2671.tmp/root.hdd/root.hdd
2014-06-16T19:20:03+0200 : Adding delta dev=/dev/ploop16059
img=/vz/private/2671.tmp/root.hdd/root.hdd (rw)
2014-06-16T19:20:03+0200 : Mounting /dev/ploop16059p1 at /vz/root/2671
fstype=ext4 data='balloon_ino=12,'


solus-ed-ch01:~# vzctl exec 2671 df -B1 /
Executing command: df -B1 /
Filesystem   1B-blocksUsedAvailable Use% Mounted on
/dev/ploop16059p1 234735247360 86872219648 134441467904  40% /
solus-ed-ch01:~# vzctl exec 2671 df -h /
Executing command: df -h /
Filesystem Size  Used Avail Use% Mounted on
/dev/ploop16059p1  219G   81G  126G  40% /


solus-ed-ch01:~# gdisk -l /dev/ploop16059
GPT fdisk (gdisk) version 0.8.10

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/ploop16059: 4194304000 sectors, 2.0 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): FA4B229E-10F7-4583-8A4F-C7B99A34945D
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 4194303966
Partitions will be aligned on 2048-sector boundaries
Total free space is 4029 sectors (2.0 MiB)

Number  Start (sector)End (sector)  Size   Code  Name
   12048  4194301951   2.0 TiB 0700  primary



So here you can see that the ploop device got provisioned with a 2 TB
device size, while the container only sees 234735247360 bytes
(235 GB / 219 GiB).


Can someone help me debugging this issue further?

This host node has had no crashes yet, and all other containers are working
completely fine. It's running the following (patched) kernel version:

solus-ed-ch01:~# uname -a
Linux solus-ed-ch01 2.6.32-042stab090.2 #1 SMP Wed May 21 19:25:03 MSK
2014 x86_64 x86_64 x86_64 GNU/Linux
solus-ed-ch01:~# kcare-uname -a
Linux solus-ed-ch01 2.6.32-042stab092.2 #1 SMP Wed May 21 19:25:03 MSK
2014 x86_64 x86_64 x86_64 GNU/Linux


Thanks in advance to anyone trying to help!


Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] Ploop filesystem provisioned incorrectly (wrong size)

2014-07-22 Thread Kevin Holly

Hi again,

I just tried to reinstall this container...
The log of that can be found here:
https://ezcrypt.it/Hd9n#SiP14ajsc8sV0SLiGm1diEc1

There is not much difference with the ploop filesystem now. Same size
(2 TB device but only 219 GiB / 235 GB available), and ploop info on the
DiskDescriptor.xml also reports 1K-blocks 229233640 (the wrong size).

On 22/07/14 21:27, Kevin Holly wrote:
 Hi,
 
 I have a customer complaining about his filesystem being too small
 (product FS size: 250GB, his FS size: 219GiB / 235GB).
 
 After further debugging this issue I am now here:
 
 
 Creation of the containers filesystem (I can provide more logs, by
 default I have LOG_LEVEL=7 and VERBOSE=1
 
 2014-06-16T19:20:02+0200 vzctl : CT 2671 : Creating image:
 /vz/private/2671.tmp/root.hdd/root.hdd size=262144000K
 2014-06-16T19:20:02+0200 : Creating delta
 /vz/private/2671.tmp/root.hdd/root.hdd bs=2048 size=524288000 sectors v2
 2014-06-16T19:20:02+0200 : Adding snapshot
 {5fbaabe3-6958-40ff-92a7-860e329aab41}
 2014-06-16T19:20:02+0200 : Storing
 /vz/private/2671.tmp/root.hdd/DiskDescriptor.xml
 2014-06-16T19:20:02+0200 : Opening delta
 /vz/private/2671.tmp/root.hdd/root.hdd
 2014-06-16T19:20:02+0200 : Adding delta dev=/dev/ploop16059
 img=/vz/private/2671.tmp/root.hdd/root.hdd (rw)
 2014-06-16T19:20:02+0200 : Running: parted -s /dev/ploop16059 mklabel
 gpt mkpart primary 1048576b 268434407423b
 2014-06-16T19:20:02+0200 : Running: mkfs -t ext4 -j -b4096
 -Elazy_itable_init,resize=4294967295 -Jsize=128 -i16384 /dev/ploop16059p1
 2014-06-16T19:20:03+0200 : Running: /sbin/tune2fs -ouser_xattr,acl -c0
 -i0 /dev/ploop16059p1
 2014-06-16T19:20:03+0200 : Creating balloon file
 .balloon-c3a5ae3d-ce7f-43c4-a1ea-c61e2b4504e8
 2014-06-16T19:20:03+0200 : Mounting /dev/ploop16059p1 at
 /vz/private/2671.tmp/root.hdd/root.hdd.mnt fstype=ext4 data=''
 2014-06-16T19:20:03+0200 : Unmounting device /dev/ploop16059
 2014-06-16T19:20:03+0200 : Opening delta
 /vz/private/2671.tmp/root.hdd/root.hdd
 2014-06-16T19:20:03+0200 : Adding delta dev=/dev/ploop16059
 img=/vz/private/2671.tmp/root.hdd/root.hdd (rw)
 2014-06-16T19:20:03+0200 : Mounting /dev/ploop16059p1 at /vz/root/2671
 fstype=ext4 data='balloon_ino=12,'
 
 
 solus-ed-ch01:~# vzctl exec 2671 df -B1 /
 Executing command: df -B1 /
 Filesystem   1B-blocksUsedAvailable Use% Mounted on
 /dev/ploop16059p1 234735247360 86872219648 134441467904  40% /
 solus-ed-ch01:~# vzctl exec 2671 df -h /
 Executing command: df -h /
 Filesystem Size  Used Avail Use% Mounted on
 /dev/ploop16059p1  219G   81G  126G  40% /
 
 
 solus-ed-ch01:~# gdisk -l /dev/ploop16059
 GPT fdisk (gdisk) version 0.8.10
 
 Partition table scan:
   MBR: protective
   BSD: not present
   APM: not present
   GPT: present
 
 Found valid GPT with protective MBR; using GPT.
 Disk /dev/ploop16059: 4194304000 sectors, 2.0 TiB
 Logical sector size: 512 bytes
 Disk identifier (GUID): FA4B229E-10F7-4583-8A4F-C7B99A34945D
 Partition table holds up to 128 entries
 First usable sector is 34, last usable sector is 4194303966
 Partitions will be aligned on 2048-sector boundaries
 Total free space is 4029 sectors (2.0 MiB)
 
 Number  Start (sector)End (sector)  Size   Code  Name
12048  4194301951   2.0 TiB 0700  primary
 
 
 
 So here you can see that the ploop device got provisioned with 2TB
 ploop filesystem size and the container only sees 234735247360 Byte
 (235 GB / 219 GiB).
 
 
 Can someone help me debugging this issue further?
 
 This host node had no crashes yet, all other containers are working
 completely fine. It's running the following (patched) kernel version:
 
 solus-ed-ch01:~# uname -a
 Linux solus-ed-ch01 2.6.32-042stab090.2 #1 SMP Wed May 21 19:25:03 MSK
 2014 x86_64 x86_64 x86_64 GNU/Linux
 solus-ed-ch01:~# kcare-uname -a
 Linux solus-ed-ch01 2.6.32-042stab092.2 #1 SMP Wed May 21 19:25:03 MSK
 2014 x86_64 x86_64 x86_64 GNU/Linux
 
 
 Thanks in advance for anyone trying to help!
 
 
 Best regards
 
 Kevin Holly - r...@hallowe.lt - http://hallowe.lt/
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] blog post: yet more live migration goodness

2014-06-21 Thread Kevin Holly
Am 16/06/14 12:28, schrieb Aleksandar Ivanisevic:
 Kir Kolyshkin k...@openvz.org writes:
 
 [...]
 

 http://openvz.livejournal.com/48634.html
 
 Speaking of ploop send/copy, have you ever thought about a continuous
 ploop send as a way of providing redundancy? In essence, ploop send
 would, instead of exiting and running a suspend command, just keep
 sending the changes to remote, so, in case the primary dies, remote
 instance can be started with the latest image. Something like DRBD does,
 but optimized for an OpenVZ use case.
That would be really cool. Something like the kernel permanently
tracking writes and something doing the live sync when a block changes.

From what I've read, ploop migration makes the kernel track writes to
the device and then migrates the changed blocks of the ploop device as
often as needed, until either no blocks change anymore or only the same
blocks keep changing; that mechanism could also be useful for a live sync
of the ploop image.
 
 If you can provide the feature I would be happy to provide the scripting
 around it.
 
 [...]
 
 



Re: [Users] tmpfs instead of devtmpfs

2013-08-23 Thread Kevin Holly
Hi Mark,

is it the same template you are using?

I can see /dev/null in combination with vSwap:

root@fwd1:~# mount
/dev/simfs on / type reiserfs (rw,usrquota,grpquota)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=419432k,mode=755)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=1258280k)
devpts on /dev/pts type devpts
(rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)

root@fwd1:~# swapoff -av
swapoff on /dev/null
swapoff: Not superuser.



On 08/22/2013 10:54 PM, Mark J. wrote:
 Have a strange vm issues.
 
 Getting the following errors on a vm's boot:
 
 /bin/bash: line 616: /dev/null: No such device or address
 /bin/bash: line 588: /dev/null: No such device or address
 /bin/bash: line 593: /dev/null: No such device or address
 /bin/bash: line 559: /dev/null: No such device or address
 /bin/bash: line 579: /dev/null: No such device or address
 /bin/bash: line 559: /dev/null: No such device or address
 /bin/bash: line 579: /dev/null: No such device or address
 /bin/bash: line 603: /dev/null: No such device or address
 Setting CPU limit: 400
 Setting CPU units: 1000
 Setting CPUs: 8
 /bin/bash: line 71: /dev/null: No such device or address
 /bin/bash: line 494: /dev/null: No such device or address
 Container start in progress...
 
 in digging around it appears to be caused by the following being set in the 
 vps:
 
 none /dev tmpfs rw,relatime,mode=755 0 0
 
 every other vm has:
 
 none /dev devtmpfs rw,relatime,mode=755 0 0
 
 Anyone happen to know why this one vm might be getting a tmpfs instead of 
 devtmpfs? or where that setting might be? Its only this one vm, all others on 
 the node work without a issue.
 
 Thanks,
 
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] Permission denied when trying to help a stuck destroy

2013-08-20 Thread Kevin Holly
On 06/11/2013 08:00 PM, Kir Kolyshkin wrote:
 On 06/11/2013 02:33 AM, Kevin Holly wrote:
 Hello,

 i already had this problem but forgot how to fix it:

 vztmp-Directory contains parts of a 3 month old container, which was
 destroyed. When i try to find -delete the directory, i get:

 [root@bedrock vzctl-rm-me.guj6L7]# find -delete
 find: cannot delete `./usr/lib/libsh/shsb': Permission denied
 find: cannot delete `./usr/lib/libsh/utilz': Permission denied
 find: cannot delete `./usr/lib/libsh/.owned': Permission denied
 find: cannot delete `./usr/lib/libsh/.sniff': Permission denied
 find: cannot delete `./usr/lib/libsh/.backup': Permission denied
 find: cannot delete `./usr/lib/libsh/.bashrc': Permission denied
 find: cannot delete `./usr/lib/libsh/hide': Permission denied
 find: cannot delete `./usr/lib/libsh': Operation not permitted
 find: cannot delete `./usr/lib': Directory not empty
 find: cannot delete `./usr': Directory not empty
 find: cannot delete `./lib/libsh.so/shhk': Permission denied
 find: cannot delete `./lib/libsh.so/shhk.pub': Permission denied
 find: cannot delete `./lib/libsh.so/bash': Permission denied
 find: cannot delete `./lib/libsh.so/shrs': Permission denied
 find: cannot delete `./lib/libsh.so/shdcf': Permission denied
 find: cannot delete `./lib/libsh.so': Operation not permitted
 find: cannot delete `./lib': Directory not empty
 find: cannot delete `./sbin/ttymon': Operation not permitted
 find: cannot delete `./sbin/ttyload': Operation not permitted
 find: cannot delete `./sbin/ifconfig': Operation not permitted
 find: cannot delete `./sbin': Directory not empty
 find: cannot delete `./etc/sh.conf': Operation not permitted
 find: cannot delete `./etc': Directory not empty

 lsattr shows this:

 [root@bedrock vzctl-rm-me.guj6L7]# lsattr etc/sh.conf
 s---ia---e- etc/sh.conf

 Anyone knows how to fix this/set the right (ch)attr?
 
 Something like chattr -R -i should work. I should probably add it to
 vzctl destroy.
Is it already in one of the stable releases, or planned, or are you still
considering whether it's a good idea to put it there?
 
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] Permission denied when trying to help a stuck destroy

2013-08-20 Thread Kevin Holly
On 08/20/2013 05:49 PM, spameden wrote:
 Obviously your system is compromised.
 
 
 I suggest reinstalling whole container / check host system as well and
 other servers if you had same passwords in use.
When I'm doing a destroy, I'm doing it because the container should be
deleted. As root inside the container, you're able to just set the +i bit
and break the cleanup that runs after the container is destroyed.


(Testcase done in a container)

root@fwd1:~# touch testfile
root@fwd1:~# chattr +i testfile
root@fwd1:~# rm testfile
rm: remove regular empty file `testfile'? y
rm: cannot remove `testfile': Operation not permitted
root@fwd1:~# chattr -i testfile
root@fwd1:~# rm testfile
rm: remove regular empty file `testfile'? y
root@fwd1:~#


 
 
 2013/8/20 Kevin Holly ope...@lists.dedilink.eu
 mailto:ope...@lists.dedilink.eu
 
 On 06/11/2013 08:00 PM, Kir Kolyshkin wrote:
  On 06/11/2013 02:33 AM, Kevin Holly wrote:
  Hello,
 
  i already had this problem but forgot how to fix it:
 
  vztmp-Directory contains parts of a 3 month old container, which was
  destroyed. When i try to find -delete the directory, i get:
 
  [root@bedrock vzctl-rm-me.guj6L7]# find -delete
  find: cannot delete `./usr/lib/libsh/shsb': Permission denied
  find: cannot delete `./usr/lib/libsh/utilz': Permission denied
  find: cannot delete `./usr/lib/libsh/.owned': Permission denied
  find: cannot delete `./usr/lib/libsh/.sniff': Permission denied
  find: cannot delete `./usr/lib/libsh/.backup': Permission denied
  find: cannot delete `./usr/lib/libsh/.bashrc': Permission denied
  find: cannot delete `./usr/lib/libsh/hide': Permission denied
  find: cannot delete `./usr/lib/libsh': Operation not permitted
  find: cannot delete `./usr/lib': Directory not empty
  find: cannot delete `./usr': Directory not empty
  find: cannot delete `./lib/libsh.so/shhk': Permission denied
  find: cannot delete `./lib/libsh.so/shhk.pub': Permission denied
  find: cannot delete `./lib/libsh.so/bash': Permission denied
  find: cannot delete `./lib/libsh.so/shrs': Permission denied
  find: cannot delete `./lib/libsh.so/shdcf': Permission denied
  find: cannot delete `./lib/libsh.so': Operation not permitted
  find: cannot delete `./lib': Directory not empty
  find: cannot delete `./sbin/ttymon': Operation not permitted
  find: cannot delete `./sbin/ttyload': Operation not permitted
  find: cannot delete `./sbin/ifconfig': Operation not permitted
  find: cannot delete `./sbin': Directory not empty
  find: cannot delete `./etc/sh.conf': Operation not permitted
  find: cannot delete `./etc': Directory not empty
 
  lsattr shows this:
 
  [root@bedrock vzctl-rm-me.guj6L7]# lsattr etc/sh.conf
  s---ia---e- etc/sh.conf
 
  Anyone knows how to fix this/set the right (ch)attr?
 
  Something like chattr -R -i should work. I should probably add it to
  vzctl destroy.
 Is it already in one of the stable releases or planned or do you still
 consider if it's a good idea to put it there?
 
  ___
  Users mailing list
  Users@openvz.org mailto:Users@openvz.org
  https://lists.openvz.org/mailman/listinfo/users
 
 --
 Best regards
 
 Kevin Holly - r...@hallowe.lt mailto:r...@hallowe.lt -
 http://hallowe.lt/
 ___
 Users mailing list
 Users@openvz.org mailto:Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 
 
 
 
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/


Re: [Users] Permission denied when trying to help a stuck destroy

2013-08-20 Thread Kevin Holly
On 08/20/2013 06:23 PM, Kir Kolyshkin wrote:
 On 08/20/2013 08:22 AM, Kevin Holly wrote:
 On 06/11/2013 08:00 PM, Kir Kolyshkin wrote:
 On 06/11/2013 02:33 AM, Kevin Holly wrote:
 Hello,

 i already had this problem but forgot how to fix it:

 vztmp-Directory contains parts of a 3 month old container, which was
 destroyed. When i try to find -delete the directory, i get:

 [root@bedrock vzctl-rm-me.guj6L7]# find -delete
 find: cannot delete `./usr/lib/libsh/shsb': Permission denied
 find: cannot delete `./usr/lib/libsh/utilz': Permission denied
 find: cannot delete `./usr/lib/libsh/.owned': Permission denied
 find: cannot delete `./usr/lib/libsh/.sniff': Permission denied
 find: cannot delete `./usr/lib/libsh/.backup': Permission denied
 find: cannot delete `./usr/lib/libsh/.bashrc': Permission denied
 find: cannot delete `./usr/lib/libsh/hide': Permission denied
 find: cannot delete `./usr/lib/libsh': Operation not permitted
 find: cannot delete `./usr/lib': Directory not empty
 find: cannot delete `./usr': Directory not empty
 find: cannot delete `./lib/libsh.so/shhk': Permission denied
 find: cannot delete `./lib/libsh.so/shhk.pub': Permission denied
 find: cannot delete `./lib/libsh.so/bash': Permission denied
 find: cannot delete `./lib/libsh.so/shrs': Permission denied
 find: cannot delete `./lib/libsh.so/shdcf': Permission denied
 find: cannot delete `./lib/libsh.so': Operation not permitted
 find: cannot delete `./lib': Directory not empty
 find: cannot delete `./sbin/ttymon': Operation not permitted
 find: cannot delete `./sbin/ttyload': Operation not permitted
 find: cannot delete `./sbin/ifconfig': Operation not permitted
 find: cannot delete `./sbin': Directory not empty
 find: cannot delete `./etc/sh.conf': Operation not permitted
 find: cannot delete `./etc': Directory not empty

 lsattr shows this:

 [root@bedrock vzctl-rm-me.guj6L7]# lsattr etc/sh.conf
 s---ia---e- etc/sh.conf

 Anyone knows how to fix this/set the right (ch)attr?
 Something like chattr -R -i should work. I should probably add it to
 vzctl destroy.
 Is it already in one of the stable releases or planned or do you still
 consider if it's a good idea to put it there?

 
 I am not really sure what to do.
 
 From one side, destroy is destroy and should work nevertheless.
 
 From another side, if immutable attribute is set, it was done for a reason.
Yeah, but if you want to destroy the container, it is because you don't
want the data anymore, and therefore it should still be deleted.
Same scenario as destroying a hard drive: if you destroy it, it is for
a good reason, whatever bits are set on any filesystem on the disk :)

 Plus, chattr -R -i on /vz/private/NNN itself takes a few seconds, it's
 kinda
 long operation to perform just in case.
This time it was the +a bit set. Weird...

[root@glowstone vztmp]# lsattr -R .
-e- ./vzctl-rm-me.L8IBqk

./vzctl-rm-me.L8IBqk:
-e- ./vzctl-rm-me.L8IBqk/sbin

./vzctl-rm-me.L8IBqk/sbin:
-u---a---e- ./vzctl-rm-me.L8IBqk/sbin/syslogd

-e- ./vzctl-rm-me.L8IBqk/etc

./vzctl-rm-me.L8IBqk/etc:
sa---e- ./vzctl-rm-me.L8IBqk/etc/ld.so.hash

-e- ./vzctl-rm-me.L8IBqk/usr

./vzctl-rm-me.L8IBqk/usr:
-e- ./vzctl-rm-me.L8IBqk/usr/sbin

./vzctl-rm-me.L8IBqk/usr/sbin:
-u---a---e- ./vzctl-rm-me.L8IBqk/usr/sbin/kfd

-u---a---e- ./vzctl-rm-me.L8IBqk/usr/secure


[root@glowstone vztmp]# chattr -R -a vzctl-rm-me.L8IBqk/
[root@glowstone vztmp]# find -delete


 
 So, I think, such cases should be resolved manually, i.e. by running
 chattr -R -i
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users

-- 
Best regards

Kevin Holly - r...@hallowe.lt - http://hallowe.lt/