Re: [Users] Virtuozzo Cloud Storage 2.0 and erasure codes

2016-09-20 Thread Kirill Korotaev
It should be ready in the first half of October.
Let me continue the discussion in private; it would be nice to get an understanding
of your workloads.

> On 20 Sep 2016, at 07:09, Corrado Fiore  wrote:
> 
> Dear All,
> 
> we're considering Virtuozzo 7 (commercial) + Virtuozzo Cloud Storage for one 
> of our clients.
> 
> Checking on the Virtuozzo website, I noticed that Cloud Storage 2.0, which 
> adds redundancy based on erasure codes, is in beta.  That's a killer feature 
> for us because it allows triple redundancy without requiring 3x the disk 
> space.  We obviously want it, but at the same time we're tight on time...
> 
> Can anyone share a hint about when Cloud Storage 2.0 will hit RTM?  Just a 
> general timeframe would be enough.
> 
> Thanks,
> Corrado Fiore
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Pstorage and read-only FS problem

2016-01-16 Thread Kirill Korotaev
In situations like yours pstorage will keep everything running fine - you
won’t even need to stop the containers/VMs on that node.
In other words, it will still use that node for reads (as long as it can
serve them without read errors), but it won’t use it for writes.
It will also start copying the data to other nodes in the background to make sure
the redundancy level is satisfied.


On 16 Jan 2016, at 12:26, Corrado Fiore  wrote:

Dear All,

we recently experienced a filesystem corruption issue on a big-ish host
where the HW RAID card suddenly started to behave erratically.

After a few minutes, the OS (OVZ kernel 2.6.32) remounted the /vz filesystem
read-only, which forced us to shut down the entire node, check the FS with the
usual tools, etc., and ultimately led to significant downtime.

On that node we weren’t using pstorage - just a regular EXT4 file system on top 
of the RAID array… which brings us to my question for the pstorage experts on 
the list.

Let’s assume that instead of a regular EXT4 file system, we were using pstorage 
with 5 or more nodes and triple redundancy.

In a case where the entire FS of a pstorage chunk server suddenly becomes 
read-only, but the server itself is still on and responding to network probes, 
does the pstorage fencing mechanism automatically intervene to exclude that 
chunk server?

In simple terms, I am trying to find out whether in such a case pstorage 
(coupled with HA features of the commercial Virtuozzo) would have prevented 
downtime.  I suspect it would, just searching for some official confirmation 
from the experts.  :-)

Thanks a lot,
Corrado Fiore
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] 2015

2015-12-31 Thread Kirill Korotaev
Many congratulations to the whole OpenVZ/Virtuozzo team!!!
And happy New Year!!!

> On 31 дек. 2015 г., at 16:12, Sergey Bronnikov  wrote:
> 
> Hello, everyone!
> 
> There were many changes in the OpenVZ project last year: we published the source code of
> commercial Virtuozzo, opened the development process for the new version of
> Virtuozzo,
> etc. Thank you for being with us this year!
> 
> Thanks to all donors, supporters [1] and OpenVZ contributors [2] in 2015.
> See you in 2016! :)
> 
> [1] https://openvz.org/Donations#2015
> [2] https://openvz.org/Contributors#2015
> 
> Regards,
>OpenVZ and Virtuozzo teams
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] OpenVZ and ZFS excellent experience

2015-01-11 Thread Kirill Korotaev
BTW, Pavel, one issue which you or others might want to consider and test well before
moving to ZFS: 2nd-level (i.e. CT user) disk quotas.
One will have to emulate the Linux quota APIs and quota files to make this work;
e.g. some apps like cPanel call the quota tools directly, and depending on the OS
installed in the container these tools expect slightly different Linux quota
behavior/APIs.
In the past there were a lot of problems with that, and we even emulated quota
files via /proc.
So be warned.
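
As an aside, on the stock ploop/simfs setup this is the knob that enables the
second-level quota emulation; a minimal sketch (hypothetical CT ID 101, the
entry count is arbitrary):

  # allow up to 3000 per-UID/GID quota entries inside the CT
  vzctl set 101 --quotaugidlimit 3000 --save
  # the setting typically takes effect after a restart
  vzctl restart 101
  # quota tools inside the CT (repquota, quota, edquota) should then see the emulated quota
  vzctl exec 101 repquota -a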


 On 11 Jan 2015, at 23:16, Pavel Odintsov pavel.odint...@gmail.com wrote:
 
 Hello!
 
 Because your question is very big I will try to answer in multiple blocks :)
 
 ---
 
 My disk space issue.
 
 24GB is the wasted space from only one container :) Total wasted space
 per server is about 900GB, and it's really terrible for me. Why?
 Because I use server SSDs with hardware RAID arrays, and the cost per TB is
 cosmic! I want to give that fast space to customers instead of wasting
 it! :)
 
 ---
 
 What I want.
 
 What do I want from the OpenVZ community? I want to share my positive experience
 and build a strong community of people running ZFS together with OpenVZ :)
 
 Well, I still have one question for the OpenVZ team, related to the behavior
 of vzctl, which is important for ZFS (and other filesystems too):
 https://bugzilla.openvz.org/show_bug.cgi?id=3166
 
 ---
 
 License issues of ZFS.
 
 License issues are not critical, because installing ZFS is
 straightforward, does not require any deep integration with the system or
 kernel, and works on almost any kernel.
 
 And we can call the ZFS tools (zpool, zfs) without any problems with the CDDL
 license of ZFS. But we can't link to libzfs, and fortunately we do not
 need to.
 
 ---
 
 Ploop/ext4 vs ZFS
 
 ploop is built on top of ext4, and I compare ZFS with ploop and ext4; many
 of the issues noted in my table relate to features of both of them.
 Obviously, it's completely incorrect to compare ploop (a block-level
 mapper device) with a classic filesystem.
 
 ---
 
 Conclusion
 
 Globally, my point is not about ZFS itself. It's about the storage
 system for containers, which is the most important part of any virtualization
 technology.
 
 Ploop is a real revolution in the container world! I really appreciate the
 ploop developers and love them (and will be happy to bring them some
 beer) :)
 
 But ploop is not the final step for container storage.
 
 And it has big problems, described here:
 https://github.com/pavel-odintsov/OpenVZ_ZFS/blob/master/ploop_issues.md
 and everybody should know about these issues. Ignoring them can
 lead to complete loss of important data!
 
 ZFS is not an ideal filesystem for containers either! It lacks a large
 number of very important features, but it's more reliable and
 featureful than ploop/ext4 :)
 
 
 Thank you!
 
 On Sun, Jan 11, 2015 at 9:57 PM, Scott Dowdle dow...@montanalinux.org wrote:
 Greetings,
 
 - Original Message -
 And I checked my containers with 200% disk overuse from the first message
 and got a negative result. 24GB of wasted space is not related to the
 cluster size issue.
 
 Yeah, but 24GB is a long way off from your original claim (if I remember 
 correctly) of about 900GB... but those probably aren't comparing the same 
 things anyway.
 
 I'm lost... because ploop and zfs are not, so far as I can tell, competing 
 filesystems on the same level.  zfs competes with other filesystems like 
 ext4 or xfs... whereas for OpenVZ, so far as I know, there isn't a 
 disk-file-as-disk competitor.  Given the popularity and stability of the 
 current qcow2 format popularized by KVM/qemu... and the large number of 
 tools compatible with qcow2 (see libguestfs)... I'm wondering if it would be 
 valuable to add qcow2 support to OpenVZ?
 
 You are currently using zfs with OpenVZ, correct?  And you didn't have to 
 modify any of the OpenVZ tools in order to do so, correct?  If that is the 
 case, what is it you want from the OpenVZ project with regards to zfs?
 
 So far as I'm concerned the license incompatibility of zfs/openzfs 
 means it cannot be distributed with stuff licensed under the 
 GPL... so I don't really see a way for OpenVZ to ever ship a zfs-enabled 
 kernel... but yeah, if needed they could add support for it in the tools if 
 that makes sense.  I'm unclear on what you are looking for other than 
 turning the OpenVZ mailing list into a zfs advocacy group.
 
 I do however appreciate you metering wasted disk space by ploop as 
 additional data the OpenVZ devs can work with but as long as ploop isn't 
 using more disk space than the max size of the container disk, I don't 
 really see a problem.  While it means one can't over-subscribe the physical 
 disk as much... excessive over-subscription is not ideal either... with 
 wasted space acting as a sort of pre-allocation buffer... and not actually 
 wasted unless the container's disk isn't going to grow in the future.
 
 I'd also like to see a comparison between ploop wasted space and that of 
 qcow2... 

Re: [Users] OpenVZ and ZFS excellent experience

2015-01-10 Thread Kirill Korotaev
Pavel,

it’s impossible to analyze it just from `du` and `df` output, so please give me
access if you want me to take a look at it.
(e.g. if I created 10 million 1KB files, du would show me 10GB, while ext4
(and most other file systems) would actually allocate 40GB, assuming a 4KB
block size)
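
A quick way to see the size-vs-allocation gap described above (a sketch; the
directory and file count are arbitrary):

  mkdir /tmp/manyfiles && cd /tmp/manyfiles
  for i in $(seq 1 10000); do echo x > f$i; done
  du -sh --apparent-size .   # sums file sizes: a few tens of KB
  du -sh .                   # sums allocated blocks: ~40MB with 4KB blocks

Each tiny file still occupies at least one filesystem block, which is why du/df
alone cannot show where the ploop "overuse" really comes from.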

Thanks,
Kirill


 On 10 Jan 2015, at 00:54, Pavel Odintsov pavel.odint...@gmail.com wrote:
 
 Thank you, Kirill! I am grateful for your answer!
 
 I reproduced this issue specially for you on one container with 2.4
 times (240% vs 20%) overuse.
 
 I do my tests with current vzctl and ploop 1.12.2 (with fixed
 http://bugzilla.openvz.org/show_bug.cgi?id=3156).
 
 Please check this gist:
 https://gist.github.com/pavel-odintsov/b2162c0f7588bb8e5c15
 
 I can't explain this behavior without digging into the ext4 data, but
 I will be very happy if you fix it :)
 
 On Sat, Jan 10, 2015 at 12:29 AM, Kirill Korotaev d...@parallels.com wrote:
 
 On 09 Jan 2015, at 21:39, Pavel Odintsov pavel.odint...@gmail.com wrote:
 
 Hello, everybody!
 
 Do somebody have any news about ZFS and OpenVZ experience?
 
 Why not?
 
 Did you check my comparison table for simfs vs ploop vs ZFS volumes?
 You should do it ASAP:
 https://github.com/pavel-odintsov/OpenVZ_ZFS/blob/master/openvz_storage_backends.md
 
 Still not interesting?
 
 For example, if you have a 5TB disk array (used up to 90%) and are using
 ploop, you now lose about 800GB of disk space!
 
 Well, AFAIR we simply have a threshold: a ploop image is not compacted until
 its size is 20% bigger than it should be…
 Also, you can try a smaller ploop block size. Anyway, my point is that it has
 nothing to do with ext4 metadata as stated in your table.
 
 
 This data is from real HWN with few hundreds of containers.
 
 I have excellent experience and very good news about ZFS! The ZFS on Linux
 team will add a very important feature, Linux quota inside containers
 (more details here: https://github.com/zfsonlinux/zfs/pull/2577).
 
 But still no news about ZFS from the OpenVZ team (or even from Virtuozzo
 Core), so we can work separately :)
 
 Fortunately, we do not need any special support from vzctl and can use stock
 vzctl with some lightweight manuals from my repo:
 https://github.com/pavel-odintsov/OpenVZ_ZFS/blob/master/OpenVZ_containers_on_zfs_filesystem.md
 
 I collected all the useful information here:
 https://github.com/pavel-odintsov/OpenVZ_ZFS
 
 Stay tuned! Join us!
 
 --
 Sincerely yours, Pavel Odintsov
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 
 
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 
 
 
 -- 
 Sincerely yours, Pavel Odintsov
 
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Curious about ploop performance results.

2014-05-05 Thread Kirill Korotaev
Oh, SSDs have their own effects ))
E.g. compression and garbage collection (which may periodically decrease performance
many times over, etc.). So always analyze that. Performance is not easy and
not black and white.

Sent from my iPhone

On 05 May 2014, at 4:44, jjs - mainphrame j...@mainphrame.com wrote:

Interesting - I'll check out FIO.

I wondered about the possibility of disk placement, and created the 3rd and 4th 
CTs in the reverse order of the 1st two, but regardless, the 2 ploopfs-based 
CTs  performed better than the 2 simfs-based CTs. (they are all ext4 
underneath) The CTs are contained in a partition at the end of the disk, which 
somewhat limits the effect of disk placement. I suppose a better test would be 
to eliminate the variables of rotational media by using SSDs for future testing.

J J




On Sat, May 3, 2014 at 11:50 PM, Kirill Korotaev d...@parallels.com wrote:
Forget about iozone - it benchmarks cached i/o and small data sets, so 
essentially it measures memory / syscall speeds. On larger data sets it 
measures mix of ram and real i/o and a lot depends on previous cache state.

Fio is a better tool.

Actually the most likely explanation for your effect is non-uniform disk
performance across the platter. You will find that for rotational media,
performance at the beginning of the block device is almost twice as fast as at
the end (the rotational speed is the same, but the linear velocity is obviously
higher on the outer tracks, which map to the beginning of the disk).
So you can verify that by dumping the extent info with dumpfs. Accurate benchmarking
would require a small localized partition for both tests to make sure
performance can't vary due to this effect.

Sent from my iPhone

On 04 May 2014, at 1:06, jjs - mainphrame j...@mainphrame.com wrote:

I did some benchmarks on newly created CTs with iozone, and the results were 
probably more in line with what you'd expect.

The simfs-based CT was about 5% faster on write, and the ploop-based CT was 
about 5% faster on re-write, read, and re-read. The results are repeatable.


Regards,

J J


On Sat, May 3, 2014 at 11:53 AM, jjs - mainphrame j...@mainphrame.com wrote:
I am continuing to do testing as time allows. Last night I ran sysbench fileio
tests, and again, the ploop CT yielded better performance than either the
simfs CT or the vzhost. It wasn't as drastic a difference as the dbench
results, but the difference was there. I'll continue in this vein with freshly 
created CTs. The machine was just built a few days ago, it's quiescent, it's 
doing nothing except hosting a few vanilla CTs.

As for the rules of thumb, I can tell you that the results are 100% repeatable. 
But explainable, ah, that's the thing. still working on that.

Regards,

J J



On Sat, May 3, 2014 at 11:31 AM, Kir Kolyshkin k...@openvz.org wrote:
On 05/02/2014 04:38 PM, jjs - mainphrame wrote:
Thanks Kir, the /dev/zero makes sense I suppose. I tried with /dev/random but 
that blocks pretty quickly - /dev/urandom is better, but still seems to be a 
bottleneck.

You can use a real file on tmpfs.

Also, in general, there are very many factors that influence test results. 
Starting from the cron jobs and other stuff (say, network activity) that runs 
periodically or sporadically and spoils your results, to the cache state (you
need to drop caches via vm.drop_caches, or better yet, reboot between tests), to the
physical place on disk where your data is placed (rotating hdds tend to be 
faster at the first sectors compared to the last sectors, so ideally you need 
to do this on a clean freshly formatted filesystem). There is much more to it, 
can be some other factors, too. The rule of thumb is results need to be 
reproducible and explainable.

Kir.



As for the dbench results, I'd love to hear what results others obtain from the 
same test, and/or any other testing approaches that would give a more 
acceptable answer.

Regards,

J J



On Fri, May 2, 2014 at 4:01 PM, Kir Kolyshkin k...@openvz.org wrote:
On 05/02/2014 03:00 PM, jjs - mainphrame wrote:
Just for kicks, here are the data from the tests. (these were run on a rather 
modest old machine)



Here are the raw dbench data:


#clients    vzhost          simfs CT        ploop CT
-----------------------------------------------------
1           11.1297MB/sec   9.96657MB/sec   19.7214MB/sec
2           12.2936MB/sec   14.3138MB/sec   23.5628MB/sec
4           17.8909MB/sec   16.0859MB/sec   45.1936MB/sec
8           25.8332MB/sec   22.9195MB/sec   84.2607MB/sec
16          32.1436MB/sec   28.921MB/sec    155.207MB/sec
32          35.5809MB/sec   32.1429MB/sec   206.571MB/sec
64          34.3609MB/sec   29.9307MB/sec   221.119MB/sec

Well, I can't explain this, but there's probably something wrong

Re: [Users] Curious about ploop performance results.

2014-05-04 Thread Kirill Korotaev
Forget about iozone - it benchmarks cached i/o and small data sets, so 
essentially it measures memory / syscall speeds. On larger data sets it 
measures mix of ram and real i/o and a lot depends on previous cache state.

Fio is a better tool.

Actually the most likely explanation for your effect is non-uniform disk
performance across the platter. You will find that for rotational media,
performance at the beginning of the block device is almost twice as fast as at
the end (the rotational speed is the same, but the linear velocity is obviously
higher on the outer tracks, which map to the beginning of the disk).
So you can verify that by dumping the extent info with dumpfs. Accurate benchmarking
would require a small localized partition for both tests to make sure
performance can't vary due to this effect.
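
To make "Fio is a better tool" concrete, here is a minimal invocation (the
target path, size and job parameters are only an illustration, not from the
original thread):

  fio --name=randrw --filename=/vz/test.fio --size=2G --bs=4k --rw=randrw \
      --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based

--direct=1 bypasses the page cache, which avoids the cached-I/O trap that
iozone falls into on small data sets.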

Sent from my iPhone

On 04 May 2014, at 1:06, jjs - mainphrame j...@mainphrame.com wrote:

I did some benchmarks on newly created CTs with iozone, and the results were 
probably more in line with what you'd expect.

The simfs-based CT was about 5% faster on write, and the ploop-based CT was 
about 5% faster on re-write, read, and re-read. The results are repeatable.


Regards,

J J


On Sat, May 3, 2014 at 11:53 AM, jjs - mainphrame j...@mainphrame.com wrote:
I am continuing to do testing as time allows. Last night I ran sysbench fileio
tests, and again, the ploop CT yielded better performance than either the
simfs CT or the vzhost. It wasn't as drastic a difference as the dbench
results, but the difference was there. I'll continue in this vein with freshly 
created CTs. The machine was just built a few days ago, it's quiescent, it's 
doing nothing except hosting a few vanilla CTs.

As for the rules of thumb, I can tell you that the results are 100% repeatable. 
But explainable, ah, that's the thing. still working on that.

Regards,

J J



On Sat, May 3, 2014 at 11:31 AM, Kir Kolyshkin k...@openvz.org wrote:
On 05/02/2014 04:38 PM, jjs - mainphrame wrote:
Thanks Kir, the /dev/zero makes sense I suppose. I tried with /dev/random but 
that blocks pretty quickly - /dev/urandom is better, but still seems to be a 
bottleneck.

You can use a real file on tmpfs.

Also, in general, there are very many factors that influence test results. 
Starting from the cron jobs and other stuff (say, network activity) that runs 
periodically or sporadically and spoils your results, to the cache state (you
need to drop caches via vm.drop_caches, or better yet, reboot between tests), to the
physical place on disk where your data is placed (rotating hdds tend to be 
faster at the first sectors compared to the last sectors, so ideally you need 
to do this on a clean freshly formatted filesystem). There is much more to it, 
can be some other factors, too. The rule of thumb is results need to be 
reproducible and explainable.

Kir.



As for the dbench results, I'd love to hear what results others obtain from the 
same test, and/or any other testing approaches that would give a more 
acceptable answer.

Regards,

J J



On Fri, May 2, 2014 at 4:01 PM, Kir Kolyshkin k...@openvz.org wrote:
On 05/02/2014 03:00 PM, jjs - mainphrame wrote:
Just for kicks, here are the data from the tests. (these were run on a rather 
modest old machine)



Here are the raw dbench data:


#clients    vzhost          simfs CT        ploop CT
-----------------------------------------------------
1           11.1297MB/sec   9.96657MB/sec   19.7214MB/sec
2           12.2936MB/sec   14.3138MB/sec   23.5628MB/sec
4           17.8909MB/sec   16.0859MB/sec   45.1936MB/sec
8           25.8332MB/sec   22.9195MB/sec   84.2607MB/sec
16          32.1436MB/sec   28.921MB/sec    155.207MB/sec
32          35.5809MB/sec   32.1429MB/sec   206.571MB/sec
64          34.3609MB/sec   29.9307MB/sec   221.119MB/sec

Well, I can't explain this, but there's probably something wrong with the test.



Here is the script used to invoke dbench:

HOST=`uname -n`
WD=/tmp
FILE=/usr/share/dbench/client.txt

for i in 1 2 4 8 16 32 64
do
dbench -D $WD -c $FILE $i > dbench-${HOST}-${i}
done

Here are the dd commands and outputs:

OPENVZ HOST

[root@vzhost ~]# dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 11.813 s, 45.4 MB/s
[root@vzhost ~]# df -T
Filesystem     Type  1K-blocks    Used Available Use% Mounted on
/dev/sda2      ext4   20642428 2390620  17203232  13% /
tmpfs          tmpfs    952008       0    952008   0% /dev/shm
/dev/sda1      ext2     482922   68436    389552  15% /boot
/dev/sda4      ext4   51633780 3631524  45379332   8% /local
[root@vzhost ~]#


PLOOP CT

root@vz101:~# dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 2.50071 

Re: [Users] Cloud Storage for OpenVZ Containers

2014-01-29 Thread Kirill Korotaev
Edward,

can you send me in private email output of:
# pstorage -c cluster stat
output?

Do you have a skype?

Thanks,
Kirill



On 29 Jan 2014, at 10:26, Edward Konetzko konet...@gmail.com wrote:

On 01/28/2014 09:51 AM, Kir Kolyshkin wrote:
On 28 January 2014 02:55, Kirill Korotaev d...@parallels.com wrote:
 On 25 Jan 2014, at 07:38, Rene C. ope...@dokbua.com wrote:


 Hi,

 I read the website about the cloud storage and I found some words, which 
 seems familiar for me.

 May I ask, which filesystem do you use to be able to regularly scrub and 
 self-heal the filesystem?

 Personaly I use zfsonlinux in production for a long time now and I am very 
 satisfied with it, and based on your description, it seems you should use 
 something like that and something on top of the native filesystem to get a 
 cloud storage.

 Or you use a ceph or alike filesystem, which has similar capabilities with 
 cloud features.

It’s more like Ceph. Data is stored in a distributed way, so unlike with zfs
you have access to the data even in case of a node failure (crash, CPU/memory
fault, etc.), and access is available from ANY cluster node.
As such we store the data and maintain checksums on every node and can do 
periodic scrubbing of the data.

Just to clarify -- this is Parallels own distributed/cloud filesystem, not CEPH 
or GlusterFS,
but similar to them. For more info, check the links at 
https://openvz.org/Parallels_Cloud_Storage#External_links




___
Users mailing list
Users@openvz.orgmailto:Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Set up a cluster using CentOS 6.5 64-bit, fresh install in KVM instances.  I
wanted to test functionality, not actual speed.

All software was latest as of last night and I followed the quick how to here 
https://openvz.org/Parallels_Cloud_Storage

Everything works great until I try to create an instance using the command 
vzctl create 101 --layout ploop --ostemplate centos-6-x86_64 --private 
/pcs/containers/101 from the docs.

About one mb of data is written to disk and then it just hangs.  The following 
is output from dmesg

[  360.414242] INFO: task vzctl:1646 blocked for more than 120 seconds.
[  360.414770] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this 
message.
[  360.415406] vzctl D 88007e444500 0  1646   16110 
0x0084
[  360.415418]  88007ea59a68 0086 8800 
06b62934b8c0
[  360.415428]   88007e9f2ad0 5eaa 
ad17694d
[  360.415437]  0ad7ef74 81a97b40 88007e444ac8 
0001eb80
[  360.415452] Call Trace:
[  360.415492]  [81517353] io_schedule+0x73/0xc0
[  360.415516]  [811f39b3] wait_on_sync_kiocb+0x53/0x80
[  360.415537]  [a04dbf47] fuse_direct_IO+0x167/0x230 [fuse]
[  360.415558]  [8112e948] mapping_direct_IO+0x48/0x70
[  360.415567]  [811301a6] generic_file_direct_write_iter+0xf6/0x170
[  360.415576]  [81130c8e] __generic_file_write_iter+0x32e/0x420
[  360.415585]  [81130e05] __generic_file_aio_write+0x85/0xa0
[  360.415594]  [81130ea8] generic_file_aio_write+0x88/0x100
[  360.415605]  [a04da085] fuse_file_aio_write+0x185/0x430 [fuse]
[  360.415623]  [811a530a] do_sync_write+0xfa/0x140
[  360.415641]  [8109d930] ? autoremove_wake_function+0x0/0x40
[  360.415655]  [812902da] ? strncpy_from_user+0x4a/0x90
[  360.415664]  [811a55e8] vfs_write+0xb8/0x1a0
[  360.415671]  [811a5ee1] sys_write+0x51/0x90
[  360.415681]  [8100b102] system_call_fastpath+0x16/0x1b

Even just trying to create a 10k file with dd causes a task to hang.  dd 
if=/dev/zero of=/pcs/test.junk bs=1k count=10


Any ideas? Anymore info you would like for debugging.
___
Users mailing list
Users@openvz.orgmailto:Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Cloud Storage for OpenVZ Containers

2014-01-29 Thread Kirill Korotaev
Edward, got it - there is a small threshold (10GB) on the minimum free space on
CSes (reserved for various cases, including recovery),
and you have ~10GB per CS, so you hit this threshold immediately.

Most likely you are running from inside VMs, right? Just increase the disk space
available to the CSes then.

On 29 Jan 2014, at 21:04, Edward Konetzko konet...@gmail.com wrote:

[konetzed@ovz2 ~]$ sudo pstorage -c test_cluster stat
connected to MDS#3
Cluster 'test_cluster': healthy
Space: [OK] allocatable 28GB of 35GB, free 31GB of 35GB
MDS nodes: 3 of 3, epoch uptime: 10h 25m
CS nodes:  3 of 3 (3 avail, 0 inactive, 0 offline)
License: [Error] License not loaded, capacity limited to 100Gb
Replication:  1 norm,  1 limit
Chunks: [OK] 1 (100%) healthy,  0 (0%) standby,  0 (0%) degraded,  0 (0%) 
urgent,
 0 (0%) blocked,  0 (0%) pending,  0 (0%) offline,  0 (0%) 
replicating,
 0 (0%) overcommitted,  0 (0%) deleting,  0 (0%) void
FS:  10KB in 2 files, 2 inodes,  1 file maps,  1 chunks,  1 chunk replicas
IO:   read 0B/s (  0ops/s), write 0B/s (  0ops/s)
IO total: read   0B (0ops), write   0B (0ops)
Repl IO:  read 0B/s, write: 0B/s
Sync rate:   0ops/s, datasync rate:   0ops/s

MDSID STATUS   %CTIME   COMMITS   %CPUMEM   UPTIME HOST
1 avail  3.1%   1/s   0.1%14m   9h 58m ovz1.home.int:2510
2 avail  2.5%   0/s   0.0%14m   9h 14m ovz2.home.int:2510
M   3 avail  3.0%   1/s   0.3%15m  10h 25m ovz3.home.int:2510

 CSID STATUS  SPACE   FREE REPLICAS IOWAIT IOLAT(ms) QDEPTH HOST
 1025 active   11GB   10GB0 0%   0/00.0 ovz1.home.int
 1026 active   11GB   10GB0 0%   0/00.0 ovz2.home.int
 1027 active   11GB   10GB1 0%   0/00.0 ovz3.home.int

 CLID   LEASES READWRITE RD_OPS WR_OPS FSYNCS IOLAT(ms) HOST
 2060  0/0 0B/s 0B/s 0ops/s 0ops/s 0ops/s   0/0 
ovz3.home.int
 2065  0/1 0B/s 0B/s 0ops/s 0ops/s 0ops/s   0/0 
ovz1.home.int

I do have skype but I have meetings all day for work and cant be on a computer 
after.  I may have time tomorrow if that would work.  I am in the central time 
zone.

Edward


On 01/29/2014 03:14 AM, Kirill Korotaev wrote:
Edward,

can you send me in private email output of:
# pstorage -c cluster stat
output?

Do you have a skype?

Thanks,
Kirill



On 29 Jan 2014, at 10:26, Edward Konetzko 
konet...@gmail.commailto:konet...@gmail.com wrote:

On 01/28/2014 09:51 AM, Kir Kolyshkin wrote:
On 28 January 2014 02:55, Kirill Korotaev 
d...@parallels.commailto:d...@parallels.com wrote:
 On 25 Jan 2014, at 07:38, Rene C. 
 ope...@dokbua.commailto:ope...@dokbua.com wrote:


 Hi,

 I read the website about the cloud storage and I found some words, which 
 seems familiar for me.

 May I ask, which filesystem do you use to be able to regularly scrub and 
 self-heal the filesystem?

 Personaly I use zfsonlinux in production for a long time now and I am very 
 satisfied with it, and based on your description, it seems you should use 
 something like that and something on top of the native filesystem to get a 
 cloud storage.

 Or you use a ceph or alike filesystem, which has similar capabilities with 
 cloud features.

It’s more like a ceph. Data is stored in a distributed way, so unlike to zfs 
you have access to the data even in case of node failure (crash, CPU/memory 
fault etc.) and access is available from ANY cluster node.
As such we store the data and maintain checksums on every node and can do 
periodic scrubbing of the data.

Just to clarify -- this is Parallels own distributed/cloud filesystem, not CEPH 
or GlusterFS,
but similar to. For more info, check the links at 
https://openvz.org/Parallels_Cloud_Storage#External_links




___
Users mailing list
Users@openvz.orgmailto:Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Setup a cluster using Centos 6.5 64bit, fresh install in KVM instances.  I 
wanted to test functionality not actual speed.

All software was latest as of last night and I followed the quick how to here 
https://openvz.org/Parallels_Cloud_Storage

Everything works great until I try to create an instance using the command 
vzctl create 101 --layout ploop --ostemplate centos-6-x86_64 --private 
/pcs/containers/101 from the docs.

About one mb of data is written to disk and then it just hangs.  The following 
is output from dmesg

[  360.414242] INFO: task vzctl:1646 blocked for more than 120 seconds.
[  360.414770] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this 
message.
[  360.415406] vzctl D 88007e444500 0  1646   16110 
0x0084
[  360.415418]  88007ea59a68 0086 8800 
06b62934b8c0
[  360.415428]   88007e9f2ad0 5eaa 
ad17694d
[  360.415437]  0ad7ef74

Re: [Users] Cloud Storage for OpenVZ Containers

2014-01-27 Thread Kirill Korotaev
On 25 Jan 2014, at 07:38, Rene C. ope...@dokbua.com wrote:

 I don't understand - it says cloud storage for openvz but in the
 documentation it says:
 
 Parallels Cloud Storage is available as a TECHNOLOGY PREVIEW ONLY for
 OpenVZ users and can't be licensed for production.
 To unlock for running in production, you should upgrade to a full
 Parallels Cloud Server product (see below).
 
 So can this be used with openvz or is it necessary to change to
 parallels software in order to use it - in which case it isn't really
 for openvz containers”.

Rene, you are right, sorry for the somewhat misleading subject. It’s a technology
preview only for OpenVZ and requires an upgrade to Parallels Cloud Server for
higher volumes of stored data.
Please contact me if you are interested in licensing it for OpenVZ.

 
 On Fri, Jan 24, 2014 at 4:04 PM, Kirill Korotaev d...@parallels.com wrote:
 openvz.org and Parallels are pleased to announce availability of Parallels 
 Cloud Storage technology preview for all openvz users!
 
 Parallels Cloud Storage is a new Software Defined Storage solution (also 
 known as virtual SAN) which makes it possible to build a scalable 
 distributed storage for running your Containers on commodity hardware and 
 SATA drives with performance comparable to real HW SAN storage. It provides 
 strong consistency semantics required for running VMs, Containers and iSCSI 
 targets, has built-in automatic data replication and recovery, supports HDDs 
 and nodes hotplug, is highly available and scales to Petabytes.
 
 Virtualization running on top of Parallels Cloud Storage gains multiple 
 advantages like high availability, fast live migration across nodes (w/o 
 storage migration), benefiting from all HDDs performance potential and 
 utilizing otherwise idle HDDs in the cluster, no capacity limitations, grow 
 on demand and so on.
 
 See more details at https://openvz.org/Parallels_Cloud_Storage. Your 
 feedback is very much appreciated!
 
 Thanks,
 OpenVZ & Parallels teams
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users
 
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Cloud Storage for OpenVZ Containers

2014-01-26 Thread Kirill Korotaev

On 25 Jan 2014, at 01:02, Benjamin Henrion zoo...@gmail.com wrote:

 On Fri, Jan 24, 2014 at 10:04 AM, Kirill Korotaev d...@parallels.com wrote:
 openvz.org and Parallels are pleased to announce availability of Parallels 
 Cloud Storage technology preview for all openvz users!
 
 Parallels Cloud Storage is a new Software Defined Storage solution (also 
 known as virtual SAN) which makes it possible to build a scalable 
 distributed storage for running your Containers on commodity hardware and 
 SATA drives with performance comparable to real HW SAN storage. It provides 
 strong consistency semantics required for running VMs, Containers and iSCSI 
 targets, has built-in automatic data replication and recovery, supports HDDs 
 and nodes hotplug, is highly available and scales to Petabytes.
 
 Virtualization running on top of Parallels Cloud Storage gains multiple 
 advantages like high availability, fast live migration across nodes (w/o 
 storage migration), benefiting from all HDDs performance potential and 
 utilizing otherwise idle HDDs in the cluster, no capacity limitations, grow 
 on demand and so on.
 
 See more details at https://openvz.org/Parallels_Cloud_Storage. Your 
 feedback is very much appreciated!
 
 Can you point us to the source for those binaries:
 
 http://download.openvz.org/pstorage/current/

Benjamin, it’s not open source, at least not yet.

 
 Best,
 
 -- 
 Benjamin Henrion bhenrion at ffii.org
 FFII Brussels - +32-484-566109 - +32-2-4148403
 In July 2005, after several failed attempts to legalise software
 patents in Europe, the patent establishment changed its strategy.
 Instead of explicitly seeking to sanction the patentability of
 software, they are now seeking to create a central European patent
 court, which would establish and enforce patentability rules in their
 favor, without any possibility of correction by competing courts or
 democratically elected legislators.
 
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] Cloud Storage for OpenVZ Containers

2014-01-24 Thread Kirill Korotaev
openvz.org and Parallels are pleased to announce availability of Parallels 
Cloud Storage technology preview for all openvz users!

Parallels Cloud Storage is a new Software Defined Storage solution (also known 
as virtual SAN) which makes it possible to build a scalable distributed storage 
for running your Containers on commodity hardware and SATA drives with 
performance comparable to real HW SAN storage. It provides strong consistency 
semantics required for running VMs, Containers and iSCSI targets, has built-in 
automatic data replication and recovery, supports HDDs and nodes hotplug, is 
highly available and scales to Petabytes.

Virtualization running on top of Parallels Cloud Storage gains multiple 
advantages like high availability, fast live migration across nodes (w/o 
storage migration), benefiting from all HDDs performance potential and 
utilizing otherwise idle HDDs in the cluster, no capacity limitations, grow on 
demand and so on.

See more details at https://openvz.org/Parallels_Cloud_Storage. Your feedback 
is very much appreciated!

Thanks,
OpenVZ & Parallels teams
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] discard support for SSD in OpenVZ kernel

2013-08-28 Thread Kirill Korotaev
I also want to add that the SSD models referred to in the bug (like the OCZ one) are
not server grade, and you guys very much risk losing your data or corrupting the
file system on power failure.
You should test it heavily.
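
For anyone following the discard discussion below: assuming the kernel and
filesystem support it, TRIM is typically used either as a mount option or as a
periodic job (the device and paths here are examples only):

  # continuous discard via a mount option in /etc/fstab
  /dev/sda3  /vz  ext4  defaults,discard  0 2
  # or a one-off / periodic TRIM of an already mounted filesystem
  fstrim -v /vz

On busy boxes a periodic fstrim (e.g. from cron) is often preferred over the
discard mount option, since online discard can add latency to unlink-heavy
workloads.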

On Aug 29, 2013, at 03:52 , Kir Kolyshkin k...@openvz.org wrote:

 On 08/28/2013 06:34 AM, spameden wrote:
 
 
 
 2013/8/28 Kir Kolyshkin k...@openvz.org
 On 08/27/2013 08:20 AM, spameden wrote:
 ArchLinux wiki says:
 Warning: Users need to be certain that kernel version 2.6.33 or above is 
 being used AND that their SSD supports TRIM before attempting to mount a 
 partition with the discard flag. Data loss can occur otherwise!
 
 So I guess it's not in the OpenVZ kernel?
 
 I'd like to use TRIM because it increases performance to SSD drastically!
 
 You'd better check it with Red Hat, looking into their RHEL6 documentation.
 
 My quick googling for rhel6 kernel ssd discard shows that rhel6 kernel
 do support trim, they have backported it (as well as tons of other stuff,
 so this is hardly 2.6.32 kernel anymore).
 
 I've just tested via hdparm (ofc it's not a perfect tool to test out disk 
 performance but still), here is what I get on the latest 2.6.32-042stab079.5:
 
 # hdparm -t /dev/mapper/vg-root
 /dev/mapper/vg-root:
  Timing buffered disk reads: 828 MB in  3.00 seconds = 275.56 MB/sec
 
 on standard debian-7 kernel (3.2.0-4-amd64):
 # hdparm -t /dev/mapper/vg-root
 /dev/mapper/vg-root:
  Timing buffered disk reads: 1144 MB in  3.00 seconds = 381.15 MB/sec
 
 and it's only read speed test.
 
 I don't get why it differs so much?
 
 
 My suggestion is, since this functionality is not directly related to OpenVZ, 
 and
 we usually don't change anything in this code (unless there is a reason to), 
 to
 try reproducing it on a stock RHEL6 kernel and, if it is reproducible, file a 
 bug
 to red hat or, if it's not reproducible, file a bug to openvz.
 
 Kir.
 ___
 Users mailing list
 Users@openvz.org
 https://lists.openvz.org/mailman/listinfo/users



___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] OpenVSwitch

2012-12-31 Thread Kirill Korotaev
The OpenVSwitch module is part of the RHEL6 OpenVZ kernel, that's absolutely right.
One just needs to configure everything.


On Dec 31, 2012, at 17:52 , LightDot light...@gmail.com
 wrote:

 On Mon, Dec 31, 2012 at 12:12 PM, Mark Olliver
 mark.olli...@thermeon.com wrote:
 Hi Guys,
 
 We are using a mix of OpenVZ and KVM for our guests on our Host Nodes, I 
 have recently been looking at OpenVSwitch as well as using that could 
 seriously help with our security isolation. I have seen in the forums that 
 it looks like OpenVZ does not currently work with the full features of 
 OpenVSwitch which is a shame, but even at a basic level it would be good to 
 use rather than standard bridging. I have attempted to compile it today but 
 found a few compile errors partly linked to RHEL6.3 issues but I have fixed 
 those and still get a few more which I can not get past when building the 
 kernel module against the OpenVZ kernel. I presume this must be due to extra 
 bits in the OpenVZ kernel that do not exist in the standard RHEL kernel.
 
 Can anyone have a go and see if there is a way to build this against the 
 latest kernel.
 I specifically have an issue with the following two definitions which I 
 believe come from datapath/linux/net_namespace.c
 
 rpl_unregister_pernet_gen_device
 rpl_register_pernet_gen_device
 
 Thanks
 
 Mark
 ___
 Users mailing list
 Users@openvz.org
 http://lists.openvz.org/mailman/listinfo/users
 
 I was under the impression that OpenVSwitch is already included in the
 latest RHEL6 openvz kernels, so no extra modules would be needed? Just
 userspace tools?
 
 I saw CONFIG_OPENVSWITCH mentioned in both testing and stable kernel's
 changelogs a while ago... June or July 2012?
 
 Seems my assumption is wrong..?
 
 Regards
 
 ___
 Users mailing list
 Users@openvz.org
 http://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
http://lists.openvz.org/mailman/listinfo/users


Re: [Users] ploop and trim/ discard support

2012-09-17 Thread Kirill Korotaev

On Sep 17, 2012, at 19:53 , Corin Langosch corin.lango...@netskin.com wrote:

 
 On 13.09.2012 at 09:22 +0200, Kirill Korotaev d...@parallels.com wrote:
 
 No, AFAIR we should use TRIM on ext4 and it simply reports unused space.
 Balloon is used for resize, via allocating some space and hiding it 
 from the user, but for compacting it's a bit bad since it can cause ENOSPC 
 while the CT is really not out of space...
 
 
 So this whole ballooning is only a workaround, as trim/discard support 
 for ext4 is only available in kernel >= 2.6.33? Once openvz is rebased 
 to a newer kernel (3.2.x?) it can/will be dropped? :)
 

You've mixed 2 different scenarios:
1. vzctl set --diskspace
When you resize a CT to a smaller size we do not want to resize the live file system
and move data around, causing I/O.
So we use the balloon to reserve some space in the CT and pretend that the CT was made
smaller.
TRIM has nothing to do with this scenario, because it wouldn't prevent the file
system from allocating its free space.

2. compacting
When CT has used some space and then files were removed image requires 
compaction to free this space back to host.
This is where both ballooning and TRIMing can help. But ballooning reserves 
disk space, so it can lead to ENOSPC inside CT and thus is better to avoid 
(remember, it reserves space!).
TRIM on the other hand is a standard way to cause file system to report it's 
unused space and this is what we use.

TRIM support is present in our RHEL6 kernels, so switching to >= 2.6.33 is not
required and won't bring any benefits in this area.
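
For illustration, the TRIM-based compaction described above is what vzctl
drives for ploop-based containers; a minimal sketch (hypothetical CT ID 101,
a vzctl/ploop version with compaction support assumed):

  # ask the in-CT ext4 to report unused blocks and shrink the image on the host
  vzctl compact 101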

Thanks,
Kirill



___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] sharing directory between two VE

2012-07-04 Thread Kirill Korotaev
A bind mount from a VE mount script solves this problem...
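
A minimal sketch of such a mount script (hypothetical CT ID and host path; the
same script can be copied for the second CT):

  #!/bin/bash
  # /etc/vz/conf/101.mount - run on the host every time CT 101 is mounted
  . /etc/vz/vz.conf        # global defaults
  . ${VE_CONFFILE}         # provides VE_ROOT for this CT
  SRC=/storage/shared      # directory on the host to share
  DST=${VE_ROOT}/mnt/shared
  mkdir -p ${DST}
  mount -n --bind ${SRC} ${DST}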

Sent from my iPhone

On 04.07.2012, at 21:05, Olivier Kaloudoff ope...@kalou.net wrote:

 Hi,
 
 For the purpose of a migration, I'd need to share a directory between
 two VEs, contexts 1 and 2. I've tried, with no success:
 
 - mounting /tmp/shared.ext2 from the host to both guests using simfs and -o  
 loop
 - mounting --bind /tmp/shared.dir from the host to both guests using mount  
 --make-shared
 - mounting --bind from the 1st VE to the 2nd
 
 Is there another way to try ... ?
 
 Using vserver, I used to change the context of the files, does OpenVz offer  
 this capability ?
 
 Best Regards,
 
 Olivier
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] vswap question - physpages/oomguarpages

2012-06-20 Thread Kirill Korotaev
I guess it means OOM, i.e. the out-of-memory killer had to kill someone since
vswap+RAM was not enough...
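
A quick way to check for this (a sketch; works on the host or inside the CT):

  egrep 'uid|physpages|oomguarpages' /proc/user_beancounters

The limits are counted in 4 KiB pages, so a physpages limit of 131072
corresponds to 131072 x 4 KiB = 512 MiB of RAM (matching PHYSPAGES=0:512M
below); non-zero failcnt on physpages/oomguarpages is consistent with the OOM
kills described above.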

Sent from my iPhone

On 21.06.2012, at 8:58, Rene C. ope...@dokbua.com wrote:

 
 I just noticed a couple of containers on one of our  vswap enabled servers 
 have non-zero failcnt's (user_beancounters)
 
    uid  resource      held     maxheld  barrier  limit                failcnt
   1413:
         physpages     84873    131166   0        131072               61
         oomguarpages  94628    377952   0        9223372036854775807  4
   1409:
         physpages     52986    262155   0        262144               1378
         oomguarpages  57155    376725   0        9223372036854775807  18
 
 (I've deleted lines with zero failcnt for clarity)
 
 It was my understanding that if a vswap-enabled container tries to use more 
 physpages than available it would start vswapping - is that what happens 
 here? Otherwise how can physpages fail?  Is this normal swapping behavior or 
 a problem?
 
 A second thing is, when I tried to raise the physpages value with vzctl it 
 again wrote the old block values into the vz conf files, i.e. 
 
 i.e. before running vzctl the values are like this: 
  PHYSPAGES=0:512M
  SWAPPAGES=0:1024M
 
 after running vzctl set 1413 --save --ram 512M --swap 1G the values are 
 like this:
  PHYSPAGES=0:131072
  SWAPPAGES=0:262144
 
 I already opened a bug report on this and I thought it had been fixed already. 
  Is there some undocumented flag that needs to be provided to vzctl to get it 
 to write out the values in the easy human readable format?
 
 Regards,
 Rene
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] occasional high loadavg without any noticeable cpu/memory/io load

2012-05-22 Thread Kirill Korotaev
Looks like in your case you've hit the physpages limit.
In such situations the VPS behaves like a standalone machine - it starts to swap out
(though virtually) and processes get stuck in the D state (swapping in/out),
which contributes to loadavg.

So either increase the memory limits for your VPS or kill/tune the memory-hungry
workload.

Note: loadavg can also increase due to CPU limits, as processes are delayed when
they overuse their CPU.

Thanks,
Kirill


On May 22, 2012, at 14:49 , Rene C. wrote:


Hi Esme,

 Did you check the /proc/user_beancounters of that VPS? Sometimes a high load 
 could be caused by buffers that are full.

Thanks for the suggestion, much appreciated!

I didn't think of checking at the time I'm afraid.  I suppose since the 
container has not been rebooted since, the beancounters should still show any 
problems encountered at the time right?

Below is the user_beancounters of the problem CT. I notice physpages and 
dcachesize have maxheld values very close to limits (even if failcnt is zero) 
could that have been the cause?


  uid  resource held  maxheld  
barrierlimit  failcnt
1407:  kmemsize252703307   1124626432   
1932525568   21474836480
   lockedpages 0   15   
524288   5242880
   privvmpages893372  5683554  
9223372036854775807  92233720368547758070
   shmpages   23 7399  
9223372036854775807  92233720368547758070
   dummy   00   
 000
   numproc   136  480  
9223372036854775807  92233720368547758070
   physpages  733468  1048591   
 0  10485760
   vmguarpages 00   
 0  92233720368547758070
   oomguarpages   137691   676209   
 0  92233720368547758070
   numtcpsock101  459  
9223372036854775807  92233720368547758070
   numflock7   37  
9223372036854775807  92233720368547758070
   numpty  14  
9223372036854775807  92233720368547758070
   numsiginfo  0   66  
9223372036854775807  92233720368547758070
   tcpsndbuf 4024896 34884168  
9223372036854775807  92233720368547758070
   tcprcvbuf 1654784  7520256  
9223372036854775807  92233720368547758070
   othersockbuf   195136  3887232  
9223372036854775807  92233720368547758070
   dgramrcvbuf 0   155848  
9223372036854775807  92233720368547758070
   numothersock  130  346  
9223372036854775807  92233720368547758070
   dcachesize  222868425   1073741824
965738496   10737418240
   numfile  385312765  
9223372036854775807  92233720368547758070
   dummy   00   
 000
   dummy   00   
 000
   dummy   00   
 000
   numiptent 197  197  
9223372036854775807  92233720368547758070

I'm not that familiar with the nitty-gritties of the beancounters but these are 
the values I have in the 1407.conf file.

PHYSPAGES=0:4096M
SWAPPAGES=0:8192M
KMEMSIZE=1843M:2048M
DCACHESIZE=921M:1024M
LOCKEDPAGES=2048M
PRIVVMPAGES=unlimited
SHMPAGES=unlimited
NUMPROC=unlimited
VMGUARPAGES=0:unlimited
OOMGUARPAGES=0:unlimited
NUMTCPSOCK=unlimited
NUMFLOCK=unlimited
NUMPTY=unlimited
NUMSIGINFO=unlimited
TCPSNDBUF=unlimited
TCPRCVBUF=unlimited
OTHERSOCKBUF=unlimited
DGRAMRCVBUF=unlimited
NUMOTHERSOCK=unlimited
NUMFILE=unlimited
NUMIPTENT=unlimited

When user_beancounters physpage limit is 1048576, with PHYSPAGES set to 4GB, 
then the held value of 733468 should correspond to about 3GB, right?  

Re: [Users] Re: [Announce] Kernel RHEL6 testing 042stab054.1

2012-04-06 Thread Kirill Korotaev
Note that a ploop image also contains the ext4 inode tables (which are preallocated by
ext4), so ext4 reserves some space for its own needs.
Simfs, however, was limiting *pure* file space.
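
One way to see that overhead directly is to inspect the ext4 superblock of the
ploop image on the host; a sketch (the ploop device name will vary per container):

  tune2fs -l /dev/ploop0p1 | egrep 'Block size|Inode count|Inode size|Reserved block count'

Inode count times inode size is the space preallocated for the inode tables, and
the reserved blocks further reduce what df reports inside the CT; simfs never
accounted for either.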

Kirill

On Apr 6, 2012, at 04:58 , jjs - mainphrame wrote:

 However I am seeing an issue with the disk size inside the simfs-based CT. 
 
 In the vz conf files, all 3 CTs have the same diskspace setting:
 
 [root@mrmber ~]# grep -i diskspace /etc/vz/conf/77*conf
 /etc/vz/conf/771.conf:DISKSPACE=2000:2400
 /etc/vz/conf/773.conf:DISKSPACE=2000:2400
 /etc/vz/conf/775.conf:DISKSPACE=2000:2400
 
 But in the actual CTs the one on simfs reports a significantly smaller disk 
 space than it did under previous kernels:
 
 [root@mrmber ~]# for i in `vzlist -1`; do echo $i; vzctl exec $i df; done
 771
 Filesystem     1K-blocks    Used Available Use% Mounted on
 /dev/ploop0p1   23621500  939240  21482340   5% /
 none              262144       4    262140   1% /dev
 773
 Filesystem     1K-blocks    Used Available Use% Mounted on
 /dev/simfs       6216340  739656   3918464  16% /
 none              262144       4    262140   1% /dev
 775
 Filesystem     1K-blocks    Used Available Use% Mounted on
 /dev/ploop1p1   23628616  727664  21700952   4% /
 none              262144       4    262140   1% /dev
 [root@mrmber ~]# 
 
 Looking in dmesg shows this:
 
 [ 2864.563423] CT: 773: started
 [ 2866.203628] device veth773.0 entered promiscuous mode
 [ 2866.203719] br0: port 3(veth773.0) entering learning state
 [ 2868.302300]  ploop1:
 [ 2868.329086] GPT:Primary header thinks Alt. header is not at the end of the 
 disk.
 [ 2868.329099] GPT:4799 != 48001023
 [ 2868.329104] GPT:Alternate GPT header not at the end of the disk.
 [ 2868.329111] GPT:4799 != 48001023
 [ 2868.329115] GPT: Use GNU Parted to correct GPT errors.
 [ 2868.329128]  p1
 [ 2868.333608]  ploop1:
 [ 2868.337235] GPT:Primary header thinks Alt. header is not at the end of the 
 disk.
 [ 2868.337247] GPT:4799 != 48001023
 [ 2868.337252] GPT:Alternate GPT header not at the end of the disk.
 [ 2868.337258] GPT:4799 != 48001023
 [ 2868.337262] GPT: Use GNU Parted to correct GPT errors.
 
 I'm assuming that this disk damage occurred under the buggy stab54.1 kernel. 
 I could destroy the container and create a replacement but I'd like to make 
 believe, for the time being, that it's valuable. Just out of curiosity, what 
 tools exist to fix this sort of thing? The log entries recommend gparted, but 
 I suspect I may not have much luck from inside the CT with that. If this were 
 PVC, there would obviously be more choices. Your thoughts?
 
 Joe
 
 On Thu, Apr 5, 2012 at 3:17 PM, jjs - mainphrame j...@mainphrame.com wrote:
 I'm happy to report that stab54.2 fixes the kernel panics I was seeing in 
 stab54.1 - 
 
 Thanks for the serial console reminder, I'll work on setting that up...
 
 Joe
 
 On Thu, Apr 5, 2012 at 3:47 AM, Kir Kolyshkin k...@openvz.org wrote:
 On 04/05/2012 08:48 AM, jjs - mainphrame wrote:
 Kernel stab53.5 was very stable for me under heavy load but with stab54.1 I'm 
 seeing hard lockups - the Alt-Sysrq keys don't work, only the power or reset 
 button will do the trick.
 
 I don't have a serial console set up so I'm not able to capture the kernel 
 panic message and backtrace. I think I'll need to get that set up in order to 
 go any further with this.
 
  054.2 might fix the issue you are having. It is being uploaded at the 
 moment...
 
 Anyway, it's a good idea to have serial console set up. It greatly improves 
 chances to resolve kernel bugs. http://wiki.openvz.org/Remote_console_setup 
 just in case.
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 
 


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Re: Hung Tasks on NFS (maybe not a OpenVZ Problem) - How to forcefully kill a container ?

2012-04-02 Thread Kirill Korotaev
vzctl stop --fast
However, it won't help in the case of tasks stuck in the D state. You need to mount
NFS with the soft and intr options for that.
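
A minimal example of such a mount (server, export path and timeout values are
placeholders):

  mount -t nfs -o soft,intr,timeo=100,retrans=2 nfsserver:/export /mnt/nfs

With soft mounts, NFS operations return an error once the retries are exhausted
instead of leaving processes stuck in the D state (at the usual cost of possible
I/O errors for the application), which is what makes the container unstoppable
here.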

Sent from my iPhone

On 02.04.2012, at 14:22, Aleksandar Ivanisevic aleksan...@ivanisevic.de 
wrote:

 Sirk Johannsen s.johann...@satzmedia.de
 writes:
 
 
 Is there a way to forcefully kill the CT ?
 In this case I don't care if the process remains running.
 I just want the rest of the CT to be stopped so I can start the CT again.
 
 try this:
 
 vzctl chkpnt VEID --kill
 
 don't know where I got it, but it worked for me in a few cases; in
 some others it did not though.
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] ploop

2012-03-29 Thread Kirill Korotaev
It depends on what content is put inside. If the VPS has lots of images/video
files, near 0% compression is possible.
If lots of text or programs, about 50% can be achieved.

Reality is typically somewhere in the middle.


On Mar 29, 2012, at 13:43, massimiliano.sciab...@kiiama.com wrote:

 Hi,
 has anyone tried to compress the file that simulates the VPS hard disk?
 If so, what's the compression achieved?
 Thanks
 
 
 On Thu, 29 Mar 2012 12:52:02 +0400, Kir Kolyshkin wrote:
 On 03/28/2012 08:01 PM, Mark Olliver wrote:
 
 Hi,
 
 With ploop, is it possible to use an LVM partition as the backend storage
 rather than a file?
 
 
 What for? The whole purpose of ploop is to use a file as a storage.
 
 If your question is can a CT use a dedicated LVM partition then the
 answer is yes, and it was quite possible before ploop.
 
 Also, with ploop is it possible for the guest to run its own LVM layer?
 With KVM currently you can assign each VE a partition, then as it
 boots up it runs its own LVM layer where the root partition is stored.
 
 
 My rough guess is yes you can (and again, ploop is not about it).
 
 You can give a CT an access to physical disk or disk partition or LVM
 volume or volume group using vzctl set --devnodes and then manage it
 from the inside. But I haven't tried it, because I don't see any
 practical use for it.
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Some observations from ploop testing

2012-03-24 Thread Kirill Korotaev
Can you please report slabtop output? We've just fixed one memory leak. Thanks!
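
For reference, a non-interactive snapshot sorted by cache size is usually the
most useful form to post (a sketch):

  slabtop -o -s c | head -20

-o prints the statistics once and exits, and -s c sorts by cache size, so the
largest kernel-memory consumers (and a possible leak) show up at the top.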

Sent from my iPhone

On 24.03.2012, at 21:57, jjs - mainphrame j...@mainphrame.com wrote:

 I've been creating simfs and ploop based containers and exercising them in 
 different ways. While the ploop-based containers are basically working, in my 
 testing a ploop-based CT seems to require more resources than an equivalent 
 simfs-based CT. On my modest 32 bit test rig with 1 GB RAM, I've been running 
 dbench on simfs based CTs and looking at performance with new kernel 
 versions. But when running dbench tests on a ploop based CT with the same 
 resources, it has not been able to finish because the machine runs out of 
 resources, performance slows to a crawl and even host processes are killed 
 off.
 
 I'll try to get some more memory for this machine for further testing.
 
 Regards,
 
 Joe

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] OpenVZ on Power?

2012-03-01 Thread Kirill Korotaev
1. It should be pretty easy to make OpenVZ compilable/runnable on PowerPC.
Typically it takes a day or so in the worst case, since there is almost no
platform-dependent code, except for maybe syscalls and their numbers.

2. However, checkpoint/restart is not supported on the PowerPC platform (originally
we supported the x86/x86-64 and IA64 platforms, then dropped IA64). Again, it
should be pretty straightforward, as 95% of the state is not platform-dependent.
However, nobody has really asked before...

Thanks,
Kirill

On Mar 1, 2012, at 20:03 , Bryson Lee wrote:

 Hi,
  
 I’m looking for an alternative to BLCR to provide checkpoint/restart 
 functionality for a Linux application on IBM Power hardware. Having seen some 
 mentions that OpenVZ supports Power, I wanted to try it out.
  
 I’ve tried to rebuild vzkernel-2.6.32-042stab049.6.src.rpm in Mock on an IBM 
 JS-12 blade (Power6) running Fedora 12, and have run into a number of 
 problems.  I’ll note that we have successfully rebuilt the Fedora12 
 2.6.32-based kernel from SRPM in the same Mock configuration.
  
 The initial issue was that the patch-042stab049 introduced a single line of 
 invalid syntax into arch/powerpc/include/asm/elf.h:
  
 export struct page *vdso32_pages[1];
  
 Correcting “export” to “extern” resulted in a type-redefinition compile 
 error, since vdso32_pages is defined as IIRC “static unsigned int” in the PPC 
 vdso.c.
  
 Removing the extern declaration from elf.h entirely, since apparently the 
 symbol usage in the cpt/cpt_mm.h is ifdef’d by CONFIG_X86 revealed another 
 stumbling block with undefined functions [un]charge_beancounter_fast() due to 
 CONFIG_BEANCOUNTERS not getting defined.  I added appropriate no-op 
 definitions to the group already present in the #ifndef CONFIG_BEANCOUNTERS 
 section of kernel/include/bc/beancounters.h, but there appears to be a larger 
 problem in that the contents of config-vz aren’t getting reflected in the 
 final kernel config used during the RPM build.
  
 My basic question is whether or not there’s any hope of successfully 
 generating a ppc64 OpenVZ kernel.  I tried the stable RHEL5 kernel SRPM as 
 well, but encountered a different build failure.
  
 I note that the last e.g. vzctl version that has an RPM download for ppc64 is 
 3.0.26 from 2/27/2011, and that the next minor release 3.0.26.1 from about a 
 week later has no mention of Power at all.   I reviewed the –announce, -user, 
 and –devel list archives from that timeframe, and didn’t see any explicit 
 mention of support for Power being dropped.
  
 Is ppc[64] still a supported architecture for OpenVZ?  If so, is 
 checkpoint/restart available?  How should I go about building a kernel (and, 
 eventually the utilities) for my Fedora12 systems?
  
 Thanks in advance,
  
 -Bryson Lee


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] CentOS 6: default inbound traffic limited for CT's

2012-02-20 Thread Kirill Korotaev
Mattias,

1. The same doesn't happen on the very same host system (VE0)? Inside container 
only?
2. Are you using venet or bridged networking?

Thanks,
Kirill

On Feb 21, 2012, at 00:31 , Mattias Geniar wrote:

 Hi,
 
 I'm running OpenVZ on a CentOS 6 x64 machine with a stable kernel
 (2.6.32-042stab049.6) and have even tried the issue below on the
 test-branch (2.6.32-042stab052.2).
 The issue I'm experiencing is also reported by Fredericep on the forum
 but never got any follow-up:
 http://forum.openvz.org/index.php?t=treegoto=44238srch=inbound#msg_44
 238
 
 It's the exact same problem: my hardware node runs perfectly fine, it
 has both full in- and outgoing networking speed. I can consistently
 (3hours+ as tested) download files at my full line speed.
 Whenever I try to same in a container, I can get a quick burst of
 network traffic for a few seconds (10MB/s+) and then fall back to
 10-200Kb/s, it varies.
 
 My first troubleshooting went to incoming traffic shaping for the
 eth0/venet0 interface, but that's not the case:
  # tc -s qdisc ls dev eth0
  qdisc mq 0: root
   Sent 65350399 bytes 363366 pkt (dropped 0, overlimits 0 requeues 0)
   rate 0bit 0pps backlog 0b 0p requeues 0
 
  # tc qdisc del dev eth0 root
  RTNETLINK answers: No such file or directory
 
 But this is a default install and it doesn't have any traffic shaping
 rules active. Neither does it have iptables active. It's all still
 running the default OpenVZ stack.
 
 Second idea was a possible hit of TCPSNDBUF or TCPRCVBUF as the defaults
 (1720320) are rather low. But even changing it to something idiotic like
 9223372036854775807 didn't make a difference. The /proc/user_beancounter
 also didn't report any failed packets.
 
 When tcpdumping the stream, I don't see anything abnormal except that
 it's just slow traffic. Nothing out of the ordinary at first glance.
 
 I'm looking for any advice on how to troubleshoot this, as I believe
 this may very well be a CentOS 6 kernel bug - but to prove that, I would
 of course need to dive deeper which is where my train of thought kind of
 ends.
 
 I look forward to any reply/idea this list may give me.
 
 Mattias Geniar
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] A question about Node RAM

2012-01-07 Thread Kirill Korotaev

On Jan 7, 2012, at 00:19 , Tim Small wrote:

 On 06/01/12 19:35, Quentin MACHU wrote:
 Hello,
 
 Thanks again!
 
 You mean that we should use for exemple this stable kernel : 
 http://download.openvz.org/kernel/branches/rhel6-2.6.32/042stab044.11/vzkernel-2.6.32-042stab044.11.i686.rpm
  to get a lot of stability ? By following this little guide : 
 http://wiki.openvz.org/Install_kernel_from_rpm_on_debian.
 
 The apps won't be so disk IO-vore. Tons of VM are for... LAMP / VocalServer 
 / Minecraft  other game servers...
 
 Isn't that putting all your eggs in one basket?  What happens if that 
 machine has a hardware fault?  Personally, I'd perhaps favour going for e.g. 
 4 or 5 Sandy Bridge based machines, each being quad core, and with 32G RAM 
 (maybe something like a Dell R210 II), and use some sort of clustering system 
 (maybe pacemaker with drbd, or glusterfs, or sheepdog) to distribute the 
 storage between the nodes, and allow moving VMs between nodes.

do not recomment gluster or sheepdog - they are nowhere near production 
quality. So speaking about reliability - a described HW with SAS drives RAID is 
by far more reliable.

 May well be cheaper too, but almost certainly more reliable...  Larger 
 numbers of simpler cheaper machines is how Google, Amazon etc. do it - big 
 fat machines like the one you've described are usually trouble in my 
 experience...
 
 Tim.
 -- 
 South East Open Source Solutions Limited
 Registered in England and Wales with company number 06134732.  
 Registered Office: 2 Powell Gardens, Redhill, Surrey, RH1 1TQ
 VAT number: 900 6633 53  
 http://seoss.co.uk/ +44-(0)1273-808309


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] A question about Node RAM

2012-01-06 Thread Kirill Korotaev
Sure, it's old information and likely it was about 32bit kernels which are 
limited to 64GB just because CPUs are... :)
64bit kernels are not limited anyhow and OpenVZ is not different in this regard 
from standard Linux.

fixed a couple of places I found with 64GB mentioning:
http://wiki.openvz.org/Different_kernel_flavors_(UP,_SMP,_ENTERPRISE,_ENTNOSPLIT)
http://wiki.openvz.org/FAQ#How_scalable_is_OpenVZ.3F

Thanks,
Kirill

On Jan 6, 2012, at 20:59 , Quentin MACHU wrote:

 Hello,
 
 I've a question for this mailing-list ^^
 
 My enterprise is going to order a 128Gb of RAM server.
 I saw that the OpenVZ Kernel can only support 64Gb.
 
 That's because the wiki isn't up to date ?
 What's about that ?
 How to bypass this limit ? Can we ?
 Recompiling the kernel.. ?
 
 It's important for us =)
 
 Thanks !
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] A question about Node RAM

2012-01-06 Thread Kirill Korotaev
From RAM/CPU perspective this configuration is fine.
But if you plan to run I/O intensive apps you may want to have more HDD drives 
(maybe with less capacity each) to make your raid capable to handle more IOPS.

Kirill

On Jan 6, 2012, at 22:08 , Quentin MACHU wrote:

 Hello,
 
 Thanks for this answer.
 So, we can use 128Gb/256Gb server ? =]
 
 Actually, we're working on Debian 6.
 Do you have any tips on Distro / Kernel ?
 
 Debian 6 + Kernel from Debian repos is really stable ? Debian 5 more maybe ?
 
 We'll have 6*3To HardDrive SAS in RAID 10 to improve I/O
 And Two Opteron 6128 8 cores Magny-Cours 8x 2Ghz.
 
 Do you think it's ok for something like 126 VM with 1Gb of RAM ? =)
 
 Thanks for all :)
 
 
 2012/1/6 Kirill Korotaev d...@parallels.com
 Sure, it's old information and likely it was about 32bit kernels which are 
 limited to 64GB just because CPUs are... :)
 64bit kernels are not limited anyhow and OpenVZ is not different in this 
 regard from standard Linux.
 
 fixed a couple of places I found with 64GB mentioning:
 http://wiki.openvz.org/Different_kernel_flavors_(UP,_SMP,_ENTERPRISE,_ENTNOSPLIT)
 http://wiki.openvz.org/FAQ#How_scalable_is_OpenVZ.3F
 
 Thanks,
 Kirill
 
 On Jan 6, 2012, at 20:59 , Quentin MACHU wrote:
 
  Hello,
 
  I've a question for this mailing-list ^^
 
  My enterprise is going to order a 128Gb of RAM server.
  I saw that the OpenVZ Kernel can only support 64Gb.
 
  That's because the wiki isn't up to date ?
  What's about that ?
  How to bypass this limit ? Can we ?
  Recompiling the kernel.. ?
 
  It's important for us =)
 
  Thanks !
  ___
  Users mailing list
  Users@openvz.org
  https://openvz.org/mailman/listinfo/users
 
 
 
 
 -- 
 Cordialement,
 MACHU Quentin
 


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Using a layered filesystem as private dir?

2012-01-05 Thread Kirill Korotaev
As Scott mentioned, we have VZFS in the commercial version of Parallels Containers.
It helps save a lot of IOPS by sharing files between containers and is fully 
POSIX compliant.

Thanks,
Kirill


On Jan 5, 2012, at 15:32 , Rick van Rein wrote:

 Hello,
 
 I've just started using OpenVZ, and it feels more natural than the
 alternatives I've seen -- my compliments!
 
 I can get a host runnig from a ZFS volume like /tank/vzdemo, which then
 also gets shown at /var/lib/vz/vz-$VEID.  But what I really want to
 do is use a layered FS (like aufs) as the private directory for the
 container.  But trying to do that leads to an error:
 
 bash# mount -t aufs -o br:/tank/vzdemo=rw:/tank/squeeze=ro none /mnt
 bash# grep VE_ /etc/vz/conf/777.conf 
 VE_PRIVATE=/mnt
 bash# vzctl create 777
 Private area already exists in /mnt
 Creation of container private area failed
 
 What is this trying to say?  Is there a way to do what I am trying
 to do?  Did I understand well that the private area is a directory,
 not a device?
 
 
 Thanks,
 -Rick
 
 
 P.S. To capture any why questions :- I am trying to share as many
 resources as possible.  Containers beat Xen/KVM/VMware in that
 respect, and when I can share the base OS and only have a thin
 layer on top, it should mean that even the buffer cache is
 shared between containers.  It also means that upgrades can be
 reduced to a minimum of repetition.
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] What is OpenVZ container scheduling granularity

2011-12-21 Thread Kirill Korotaev
It's floating and depends on priorities. Plus, what matters more for latency is not 
granularity but preemptiveness.

Sent from my iPhone

On 21.12.2011, at 0:34, shule ney neysh...@gmail.com wrote:

 Hi all:
 I'm eager to know what is OpenVZ container scheduling granularity, 1ms or 
 something??? I really need information about this.

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] What is OpenVZ container scheduling granularity

2011-12-21 Thread Kirill Korotaev
If the CPU has nothing to do because your app went to sleep (even for 1us), another 
task will be scheduled on it - just like for conventional tasks in Linux.


On Dec 21, 2011, at 19:50 , shule ney wrote:

 Much thanks Kirill, I really appreciate your reply! My question is: 
  Suppose two containers exist on my machine which can use 0%-100% CPU, each 
 of them has only one active process. If I sleep one container's process for 
 1us which makes this container has nothing to do, will the the container be 
 scheduled off and the other container gets scheduled? Is 1us too small for 
 container scheduling?? I want to know if this case is possible. Thanks very 
 much. 
 
 2011/12/21 Kirill Korotaev d...@parallels.com
 It's floating and depends on priorities. Plus, what matters more for latency is not
 granularity but preemptiveness.
 
 Sent from my iPhone
 
 On 21.12.2011, at 0:34, shule ney neysh...@gmail.com wrote:
 
  Hi all:
  I'm eager to know what is OpenVZ container scheduling granularity, 1ms or 
  something??? I really need information about this.
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Heavy Disk IO from a single VM can block the other VMs on the same host

2011-12-01 Thread Kirill Korotaev
That's most likely due to a single file system being used for all containers - the 
journal becomes a bottleneck.
fsync forces journal flushes, and other workloads begin to wait for the journal... 
In reality, workloads like this are typical only for
heavily loaded databases or mail systems.

How to improve:
- increase the journal size
- split file systems, i.e. run each container from its own file system
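
A rough sketch of the second option (the LVM volume name and sizes are purely 
illustrative):

  # dedicated ext3 file system with a larger (400MB) journal for one container
  mke2fs -j -J size=400 /dev/vg0/ct101
  mount /dev/vg0/ct101 /vz/private/101

For an existing ext3 file system the journal can be recreated with a bigger size 
(on an unmounted fs) via tune2fs -O ^has_journal followed by tune2fs -j -J size=...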

Thanks,
Kirill


On Nov 29, 2011, at 20:13 , Hubert Krause wrote:

 Hello,
 
 my environment is a Debian squeeze host with a few debian squeeze
 guests. The private and root filesystems of the guest are locatet on
 the same raid device (raid5) in an luksCrypt Container in an LVM
 container on an ext4 partition with nodelalloc as mountoption. If I run
 the tool stress:
 
 stress --io 5 --hdd 5 --timeout 60s (which means fork 5 threads doing
 read/write access and 5 threads doing constantly fsync) the
 responsivness of the other VMs is very bad. That means, Isolation for
 IO operations is not given. I've tried to reduce the impact of the
 VM with 'vzctl set VID --ioprio=0'. There was only a
 minor effect, my application on the other VM where still not
 responsive.
 
 Any Idea how to prevent a single VM to disturb the other VMs regarding
 diskIO?
 
 Greetings
 
 Hubert
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] OS/app in the OpenVZ container

2011-11-10 Thread Kirill Korotaev
Yes, it's possible to run a single app in a container.
The easiest way is to let the startup scripts set up /proc, sysfs and the rest 
of the environment, and then specify in inittab or rc.X what to run on this 
particular runlevel.
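
A minimal sketch (the "ma" id, the runlevels and the binary path are placeholders):

  # line added to the container's /etc/inittab
  ma:2345:respawn:/usr/local/bin/myapp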

However, for a media player you may also need to run some X components, not just 
the media player alone.

Thanks,
Kirill



On Nov 10, 2011, at 11:29 , Tommy wrote:

 Hi All,
 
 I'm doing some research on processing virtual machine recently.
 
 As I what know now, I think OpenVZ vps runs the same OS kernel as the host 
 system.
 The vps is a group of processes which are forked on the host system.
 In the OpenVZ container, thera are some processes to run an os.
 Is it possible to run only one App in the container instead of the os based 
 on the host os? such as a media player?
 what are work should I need to do mainly?
 could you give me some suggestion?
 
 Thanks.
 
 Tommy
 -- 
 Yours Sincerely!
 
 
 


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] bug or feature?: ps -el on HN shows all processes, incl. those of VEs

2011-11-07 Thread Kirill Korotaev
http://wiki.openvz.org/Processes_scope_and_visibility
Plus, as far as I remember there was a patch somewhere on download.openvz.org 
or a sysctl which allows hiding containers' processes from the host (VE0).


On Nov 7, 2011, at 13:35 , lst_ho...@kwsoft.de lst_ho...@kwsoft.de wrote:

 Zitat von U.Mutlu for-gm...@mutluit.com:
 
 ps -el (and also ps aux etc.) on the HN shows all processes,
 incl. those of VEs.
 Is there a way to show, on the HN, only the processes of the HN itself,
 excluding the processes of the VEs?
 
 This is as far as i know by design. The HN is the Hypervisor and must  
 have a global view what is going on the machine. That's why it is  
 advised to not use any other services beside openvz on the HN.
 
 Regards
 
 Andreas
 
 
 


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] bug or feature?: ps -el on HN shows all processes, incl. those of VEs

2011-11-07 Thread Kirill Korotaev

On Nov 7, 2011, at 14:58 , Anatoly Pugachev wrote:

 On Mon, Nov 7, 2011 at 2:00 PM, Kirill Korotaev d...@parallels.com wrote:
 http://wiki.openvz.org/Processes_scope_and_visibility
 Plus, as far as I remember there was a patch somewhere on 
 download.openvz.org or sysctl which allows to hide non-root processes from 
 root VE.
 
 
 On Nov 7, 2011, at 13:35 , lst_ho...@kwsoft.de lst_ho...@kwsoft.de wrote:
 
 Zitat von U.Mutlu for-gm...@mutluit.com:
 
 ps -el (and also ps aux etc.) on the HN shows all processes,
 incl. those of VEs.
 Is there a way to show, on the HN, only the processes of the HN itself,
 excluding the processes of the VEs?
 
 This is as far as i know by design. The HN is the Hypervisor and must
 have a global view what is going on the machine. That's why it is
 advised to not use any other services beside openvz on the HN.
 
 I know I'm a bit offtopic here, but taking in example solaris 10 with
 it's zones, it's possible to supply
 ps with -Z command , which will say what zone/container process
 belongs to. Would be nice,
 to somehow label container processes with container ID in kernel and
 have userland (ps for example, or any other tool)
 to be able to show this label.

There are the tools vzps and vztop which do that.
vzps ax -E VEID
will show tasks from the specified VE only (and with -E 0 it will show host system 
tasks only, filtering out containers).

Thanks,
Kirill


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] How do I know which IP packet is for which container in kernel space

2011-04-06 Thread Kirill Korotaev
Check venet_xmit() and veth_xmit().
These functions are called on the host/VE boundary and have the context for the 
skb at hand.


On Apr 6, 2011, at 18:55 , niu xinli wrote:

 Hi,
   I need to build a virtual network to test our program. We used Xen before 
 migrating them to openvz. There are some patches I should make to the kernel.
   In net/ipv4/ip_output.c, before the packet is sent out, we need to do some 
 operations on the packet. The problem is , I don't know the packet belongs to 
 which openvz node. It would be easy to distinguish them in user space but we 
 need to know them in kernel space. 
   Is there some data structure in kernel space containing this kind of 
 information. Please give me some suggestions! Thanks a lot!


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Re: GPL

2010-06-23 Thread Kirill Korotaev
GPLv2.
See sources and RPMs please.

On Jun 23, 2010, at 14:04 , Nirmal Guhan wrote:

 Hi,
 
 Please let me know on this.
 
 Thanks,
 Nirmal
 
 On Mon, Jun 21, 2010 at 4:50 PM, Nirmal Guhan vavat...@gmail.com wrote:
 Hi,
 
 Please let me know the GPL version the kernel patch and tools are
 licensed under. Is that GPLv2 or 3 ?
 
 Thanks,
 
 Nirmal
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Container creation hangs forever

2010-06-23 Thread Kirill Korotaev
The answer should lie in these messages:
 Unable to get full ostemplate name for centos-5-x86
 Warning: distribution not specified default used /etc/vz/dists/default

which say that the OS distribution wasn't detected correctly for some reason on 
container creation, and thus on start the default network configuration setup was 
used (which didn't work, I guess).
Check your template and vzctl installation.
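
A quick sanity check could look like this (the file names are just an example):

  ls /vz/template/cache/
  # the cache file name, e.g. centos-5-x86.tar.gz, must match the --ostemplate value
  rpm -q vzctl vzctl-lib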


On Jun 23, 2010, at 14:08 , Nirmal Guhan wrote:

 Hi there,
 
 Any clue on this? Am stuck here for few days now :(
 
 --Nirmal
 
 On Mon, Jun 21, 2010 at 2:18 PM, Nirmal Guhan vavat...@gmail.com wrote:
 I don't think there is any kernel crash. I ran gdb and this is where
 it waits for ever. Any other command (from other sessions) too hang
 after this point :(
 
 (gdb) run enter 50
 Starting program: /usr/sbin/vzctl enter 50
 Container is not running
 
 Program exited with code 037.
 Missing separate debuginfos, use: debuginfo-install vzctl-3.0.23-1.i386
 (gdb) run start 50
 Starting program: /usr/sbin/vzctl start 50
 Warning: distribution not specified default used /etc/vz/dists/default
 Starting container ...
 Detaching after fork from child process 1263.
 Initializing quota ...
 Detaching after fork from child process 1264.
 Detaching after fork from child process 1265.
 Container is mounted
 Detaching after fork from child process 1266.
 ^C
 Program received signal SIGINT, Interrupt.
 0x00263832 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
 (gdb) bt
 #0  0x00263832 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
 #1  0x00350003 in __read_nocancel () from /lib/libc.so.6
 #2  0x4f60c1ca in vz_env_create (h=0x8051e48, veid=50, res=0x8051018,
wait_p=0xbfffeeb8, err_p=0xbfffeeb0, fn=0, data=0xfe00) at env.c:487
 #3  0x4f60cf0f in vps_start_custom (h=0x8051e48, veid=50, param=0x8051008,
skip=SKIP_NONE, mod=0xfe00, fn=0xfe00, data=0xfe00)
at env.c:600
 #4  0x4f60d4a5 in vps_start (h=0xfe00, veid=4294966784, param=0xfe00,
skip=4294966784, mod=0xfe00) at env.c:682
 #5  0x0804c58a in run_action ()
 #6  0x0804d26b in main ()
 (gdb) c
 Continuing.
 ^C
 Program received signal SIGINT, Interrupt.
 0x00263832 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
 (gdb) bt
 #0  0x00263832 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
 #1  0x00350003 in __read_nocancel () from /lib/libc.so.6
 #2  0x4f60c1ca in vz_env_create (h=0x8051e48, veid=50, res=0x8051018,
wait_p=0xbfffeeb8, err_p=0xbfffeeb0, fn=0, data=0xfe00) at env.c:487
 #3  0x4f60cf0f in vps_start_custom (h=0x8051e48, veid=50, param=0x8051008,
skip=SKIP_NONE, mod=0xfe00, fn=0xfe00, data=0xfe00)
at env.c:600
 #4  0x4f60d4a5 in vps_start (h=0xfe00, veid=4294966784, param=0xfe00,
skip=4294966784, mod=0xfe00) at env.c:682
 #5  0x0804c58a in run_action ()
 #6  0x0804d26b in main ()
 (gdb)
 
 --Nirmal
 
 On Sun, Jun 20, 2010 at 10:03 AM, Enrico Weigelt weig...@metux.de wrote:
 * Nirmal Guhan vavat...@gmail.com wrote:
 Hi,
 
 I followed the steps in
 http://wiki.openvz.org/OS_template_cache_preparation to create the
 template. I used
 
 Alternative: use precreated template cache
 
 to copy the template to /vz/template/cache and then
 
 vzctl create 107 --ostemplate centos-5-x86
 Unable to get full ostemplate name for centos-5-x86
 Creating container private area (centos-5-x86)
 
 It then hangs here for ever (I waited almost 2 hrs and have a faster
 internet connection too).
 
 Any sign of kernel crash in syslog ?
 
 
 cu
 --
 -
  Enrico Weigelt==   metux IT service - http://www.metux.de/
 -
  Please visit the OpenSource QM Taskforce:
http://wiki.metux.de/public/OpenSource_QM_Taskforce
  Patches / Fixes for a lot dozens of packages in dozens of versions:
http://patches.metux.de/
 -
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 
 
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Re: MIPS support

2010-06-23 Thread Kirill Korotaev
It's quite easy to support (especially if RHEL kernels support it).
Porting to SPARC took less than a week or so. MIPS should be easier.

On Jun 23, 2010, at 14:05 , Nirmal Guhan wrote:

 Hi,
 Please let me know on this.
 --Nirmal
 
 On Mon, Jun 21, 2010 at 4:14 PM, Nirmal Guhan vavat...@gmail.com wrote:
 Hi,
 
 I understand there is a OpenVZ support for PPC (32  64 ?) but just
 wondering if MIPS is also supported. If so, are both 32 and 64 bit
 supported?
 
 --Nirmal
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] experience with GFS2 and OpenVZ

2010-03-05 Thread Kirill Korotaev
Our experience with GFS2 was no success, and it missed a lot of functionality a 
year ago or so.
E.g. it didn't even save file modes correctly and was losing the +x bit :) (obviously,
no one ever tried to run executables from it). So we do not recommend using it until
it's officially released in RHEL6 (or later products) and stabilized.

Thanks,
Kirill

On Mar 5, 2010, at 14:31 , frank wrote:

 Hi,
 we have two Red Hat 5.4 nodes, sharing a GFS filesystem, and we are 
 wondering to migrate to GFS2 in order to get a better performance. We 
 have not migrated yet because we are worried about the stability, and it 
 is supposed that GFS is better on this.
 
 We would like to know about other openvz user experiences with GFS2 to 
 decide to stay with GFS for a while, or to migrate to GFS2.
 
 Thanks in advance.
 
 Frank
 
 
 -- 
 Aquest missatge ha estat analitzat per MailScanner
 a la cerca de virus i d'altres continguts perillosos,
 i es considera que està net.
 
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] RE: Trouble getting the 4G (a.k.a. hugemem) patch to work.

2009-04-09 Thread Kirill Korotaev
The default OpenVZ kernel is built with the 3GB user-space split, not 4G/4G, for
compatibility with some old Java versions.
So user space is limited to 3GB as usual.

On 4/9/09 4:30 PM, Edward Hibbert edward.hibb...@metaswitch.com wrote:

 One other bit of info; I'm fairly sure that the 4G patch is built in to the
 kernel, because I can see this during the boot sequence:
  
 mapped 4G/4G trampoline to fff6d000.
 So that leaves me baffled as to why I can't get a process above 3G.
 
 
 From: Edward Hibbert
 Sent: 09 April 2009 11:41
 To: 'users@openvz.org'
 Subject: Trouble getting the 4G (a.k.a. hugemem) patch to work.
 
 Apologies if this is a dumb FAQ.
  
 I'm trying to get a kernel with what's known in the RedHat world as hugemem,
 i.e. 4G user space.  I've downloaded a -ent version of the 2.6.18 kernel, and
 also tried building one myself, but neither of these seems to allow me to
 exceed 3G.  
  
 The source code does seem to have 4G macros in it, and I find it hard to
 believe that the patch simply doesn't work - so I'm more inclined to think I'm
 doing something stupid.
  
 Any suggestions?  Yes, yes, I know we should move to 64-bit, but that's not an
 option right now.
  
 Regards,
  
 Edward.


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] openvz disk access performance

2009-01-19 Thread Kirill Korotaev
This is kind of a strange comparison...

1. A *testing* (not production, not complete, never stabilized/optimized)
branch of OpenVZ based on the 2.6.22 kernel was selected for benchmarking, which
is obviously a bad idea. This branch was used purely for mainstream
integration... I wonder why the author selected it?

2. Compilation is measured unparallelized, i.e. without the -j option, so the hardware
is not fully utilized. It is much more interesting to measure parallelized
compilation, on which VM solutions (including Xen) behave much worse. For
example, on a 2x quad-core 3.0GHz, 16GB RAM system an 8-vCPU container does the
compilation 78% faster than an 8-vCPU VM, i.e. almost twice as fast.

3. Why doesn't the paper present the variation of the results (RMS)?
It's simply wrong to compare any two values when you don't know the RMS...

4. Disk performance varies a lot across the disk surface (up to 2x
between the beginning and the end of the partition). This is not taken into
account AFAICS and obviously may result in arbitrary measured values...

Kirill


On 1/19/09 10:49 PM, openvz@jks.tupari.net openvz@jks.tupari.net
wrote:

 According to
 http://www.scribd.com/doc/4916478/comparison-of-open-source-virtualization-tec
 hnology
 openvz has good network performance, but bad disk access performance.  Has
 anything changed in the 4 months since that was posted?
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] openvz disk access performance

2009-01-19 Thread Kirill Korotaev
If someone is interested in doing a 3rd-party comparison I can help with creating a
correct methodology, taking into account lots of obstacles such as
non-uniform disk performance, time not always ticking in real time inside VMs, and so
on. It's really, really hard to make an apples-to-apples comparison :(

Kirill

On 1/19/09 11:23 PM, Michael H. Warfield m...@wittsend.com wrote:

 On Mon, 2009-01-19 at 14:49 -0500, openvz@jks.tupari.net wrote:
 According to
 http://www.scribd.com/doc/4916478/comparison-of-open-source-virtualization-te
 chnology
 openvz has good network performance, but bad disk access performance.  Has
 anything changed in the 4 months since that was posted?
 
 Wow...  Could they possibly chosen a more inconvenient format.  A
 slide
 show in pdf fed through flash.  That was painful.  Can't even download
 the pdf to just page through it without signing up for an account.  Man
 that sucks.
 
 I'd like to see some independent validation of those numbers and the
 methodology.  Some things in there don't seem to pass the smell test
 (particularly wrt disk times) and I wonder how well managed things like
 file system positioning (location within a disk) and fragmentation were
 managed and controlled.  I would like to know if they ran each test from
 a controlled partition (so the location on disk didn't vary) and rebuilt
 the file system each time (to manage fragmentation).  But, even the dd
 from /dev/zero to /dev/null seems rather wonky to me.
 
 I also find it hard to believe, just from personal experience, that
 Xen
 would beat OpenVZ for anything.  I've run Xen and I have OpenVZ in
 production.  I've got a couple dozen OpenVZ VM's running on a single
 platform with virtually no major load average problem (400+ processes at
 any one time) where VMware crushed the processor at less than a dozen
 and Xen couldn't even keep up with that (no HW virtualization).
 
 They also show Xen outperforming VirtualBox (I would have loved to see
 a VMware comparison in there as well) but that is totally contrary to my
 experience both with an without HW virtualization (but I noticed they
 were using HW virt for Xen and had it disabled for VirtualBox for at
 least some of the tests...  Hmmm...).
 
 I have first hand hands on experience with VMware, VirtualBox, Xen
 (with and without HW vt), OpenVZ, Linux-Vservers, and kvm.  Their
 results are too at odds with my experience.
 
 Mike
 --
 Michael H. Warfield (AI4NB) | (770) 985-6132 |  m...@wittsend.com
/\/\|=mhw=|\/\/  | (678) 463-0932 |  http://www.wittsend.com/mhw/
NIC whois: MHW9  | An optimist believes we live in the best of all
  PGP Key: 0xDF1DD471| possible worlds.  A pessimist is sure of it!
 


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] openvz IO bench

2008-08-31 Thread Kirill Korotaev
Disk performance is very hard to benchmark in a fair manner since it very
much depends on where the physical blocks are allocated by the file system.
FYI, disk speed can vary by 2x between the beginning and the end of a partition.
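
A simple way to see it for yourself (illustrative only - /dev/sdX is a placeholder;
the commands only read):

  # read 1GB from the very beginning of the disk
  dd if=/dev/sdX of=/dev/null bs=1M count=1024
  # read 1GB from roughly the last gigabyte of the disk
  dd if=/dev/sdX of=/dev/null bs=1M count=1024 \
     skip=$(( $(blockdev --getsz /dev/sdX) / 2048 - 1024 ))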

Thanks,
Kirill



On 8/31/08 2:31 PM, Zhaohui Wang [EMAIL PROTECTED] wrote:




 Hi all

 Have you ever tried benchmark the IO overhead in openvz? I did 2 set of test,
 one is 20 times wget a 7.4 file from a remote websever in a network link
 without interference of other links; the other is cp the 7.4g file for 10
 times. The wget test showed 4% IO overhead in openvz 2.6.24 kernel comparing
 to vanilla,while the cp test showed a  -7% overhead.

 How come cp operation is more faster in openvz kernel than vanilla kernel?Any
 expert can offer a explaination?

 Many thanks.


 Best Regards
 Zhaohui Wang


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
 Behalf Of Adeel Nazir
 Sent: Saturday, August 30, 2008 5:34 PM
 To: users@openvz.org; [EMAIL PROTECTED]
 Subject: [Users] OpenVZ  Linux-rt

 Has anyone tried mixing Ingo Milnar's real-time patchset with the
 OpenVZ patches? Or if anyone has any ideas if there'd be conflicting
 issues between the 2 patchset's? I tried trivially applying the patches
 from both projects and there were a lot of places where the patching
 failed, requiring a manual review, leading me to ask for some details
 on what the openvz patchset really changes from the stock kernel?

 Thanks


 Adeel



 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users

 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] DRBD version?

2008-06-18 Thread Kirill Korotaev
AFAIK, 8.0 is the stable branch, though sure, an update to 8.0.12 has been due for a 
long time...
But in reality, I was simply asked to include exactly these versions by the developers
working with DRBD.
If they believe it must/can be updated to the 8.2 branch - why not?

Kirill

Gregor Mosheh wrote:
 Why does OpenVZ (for Fedora anyway) come with such an old version of
 DRBD (8.0.7) instead of the newer 8.2 series?
 
 Is there any reason I shouldn't or couldn't upgrade DRBD?
 
___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: AW: [Users] Veth mac generation

2008-06-12 Thread Kirill Korotaev
Do I understand correctly that you actually experience the following problem:
1. the veth MAC address is lower than your ethX MAC.
2. so brX is assigned min(vethX-MAC, ethX-MAC), which is the vethX MAC.
3. and what is your problem with that? That the host system MAC changes 
dynamically and networking breaks, or what?

I just can't see how a fully random 6-byte MAC can help, because sometimes it 
will be low enough as well
and you will hit the problem anyway.

If I got your problem right, then I can suggest a possible solution - in the 
RHEL5 kernel we have functionality called
via_phys_dev (triggered by the BRCTL_SET_VIA_ORIG_DEV ioctl). This forces the kernel 
to work with the original
interface ethX (the first one added to the bridge) and pass the traffic to it. This 
allows adding ethX to a bridge w/o the need
to propagate its netfilter rules and other settings to brX.

Thanks,
Kirill


Dietmar Maurer wrote:
 Why I asked is because of that bridge problem:
 
 http://forum.openvz.org/index.php?t=msgth=5291#msg_26576
 
 A bridge always select the lowest mac address.
 
 This patch solves the problem, but i am not sure if there are side effects.
 
 https://lists.linux-foundation.org/pipermail/bridge/2008-June/005895.html
  
 The SWSOFT OID is quite 'low', so the problem occurs frequently.
 
 - Dietmar
 
 
 -----Original Message-----
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On behalf of Kirill Korotaev
 Sent: Wednesday, 11 June 2008 15:47
 To: users@openvz.org
 Subject: Re: [Users] Veth mac generation

 and yes and no.
 These upper 3 bytes are reserved for our company, so 
 selecting them you will never conflict with other devices in 
 network infrastructure.
 i.e. the worst what can happen 2 veths will conflict.

 On the other hand - you are right, 6 bytes are better :)
 
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 
___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Veth mac generation

2008-06-11 Thread Kirill Korotaev
Yes and no.
These upper 3 bytes are reserved for our company, so by selecting them you
will never conflict with other devices in the network infrastructure,
i.e. the worst that can happen is that 2 veths will conflict with each other.

On the other hand - you are right, 6 random bytes are better :)

Kirill

Dietmar Maurer wrote:
 Hi all,
 
 The code to generate mac addresses for veth (generate_mac in veth.c)
 uses the
 Constant SW_OUI for upper 3 bytes, and random values for lower 3 bytes.
 Thus 
 giving 2^24 possible values.
 
 Isn't it better to use random numbers for all 6 bytes, like the code 
 in the linux kernel:
 
 static inline void random_ether_addr(u8 *addr)
 {
 get_random_bytes (addr, ETH_ALEN);
 addr [0] = 0xfe;   /* clear multicast bit */
 addr [0] |= 0x02;   /* set local assignment bit (IEEE802) */
 }
 
 That would make conflict less likely.
 
 - Dietmar
 
 
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] 32bit vztools and 64bit kernel

2008-04-21 Thread Kirill Korotaev
Yes, it's a known incompatibility. Please use 64-bit tools with a 64-bit kernel.
There was a bug, but it's low priority, since it's not a critical/major one.


Alexander GQ Gerasiov wrote:
 Hello again.
 
 I'm experimenting with openvz and found one more issue.
 I'm using 32bit system Debian Lenny i386. Kernel compiled for i686
 processor with vz-tools for i386 arch works ok, but when I tried to
 boot into x86_64 kernel (with the same 32bit userspace) vztools said
 strange things:
 
 [EMAIL PROTECTED]:~# ls /var/lib/vzquota/
 [EMAIL PROTECTED]:~# vzctl --verbose start 1001
 Starting VE ...
 Initializing quota ...
 VE is mounted
 Error: kernel does not support user resources. Please, rebuild with
 CONFIG_USER_RESOURCE=y
 VE start failed
 Received signal:  11 in /usr/sbin/vzquota
 Error: Unable to apply new quota values quota not running
 Received signal:  11 in /usr/sbin/vzquota
 VE is unmounted
 
 If I turn off vz's disk quotas I see
 
 [EMAIL PROTECTED]:~# vzctl --verbose start 1001
 Starting VE ...
 Initializing quota ...
 VE is mounted
 Error: kernel does not support user resources. Please, rebuild with
 CONFIG_USER_RESOURCE=y
 VE start failed
 VE is unmounted
 
 
 So I see 3 problems here:
 1st. vzquota segfaults. Not really bad, but looks dirty a bit.
 2nd. vzctl says something about CONFIG_USER_RESOURCE which is no longer
 used as I can see (It's absent in any Kconfig, but still used in tcp
 code patched with openvz patch)
 3d. Finally 32bit userspace doesn't work with 64bit kernel. That's
 terrible %)
 
 Am I right? How should I submit it to bugzilla? 3 different bugs or a
 single one?
 
___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] fake swap != 0 in VE?

2008-02-12 Thread Kirill Korotaev
The only way I'm aware of is /proc/meminfo:
just patch the kernel to always print something non-zero there, and your application
will most likely become happy.

Thomas Sattler wrote:
 [...] while you can set up a VE such that issuing 'free'
 will show both RAM and swap- it will be that of the HN.
 
 That might be all that I need. I can't tell how
 the application checks for swap. Could someone
 give me a hint how an application (like 'free')
 can see any swap?
 
 I'm quite sure that I do not *need* the swap as
 there is plenty of RAM. But the application com-
 plains if there is no swap to be seen.
 
 Thomas
 
___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] problem with snmpd on veth

2008-02-11 Thread Kirill Korotaev
you are welcome!

Kirill

Alexander Prinsier wrote:
 It's not a VLAN. I don't remember giving it a name myself. I thought
 openvz somehow choose the name (since it's for VE 3003).
 
 I edited the host_ifname in /etc/vz/conf/3003.conf to veth3003 instead
 of veth3003.0 and it fixed it. Thanks for helping finding the error.
 
 Alexander
 
 Kirill Korotaev wrote:
 is it a VLAN or did you just give veth that name?
 I guess snmpd tries to be too smart and assumes it is a VLAN... But maybe 
 I'm wrong.


 Alexander Prinsier wrote:
 I'm running snmpd/strace in the host.

 ip a l lists the interface 'veth3003.0'.

 Could this '.0' make a difference?

 Alexander

 Kirill Korotaev wrote:
 this ioctl() return ifindex by device name.
 where do you run this snpmd/strace? in host or in VE?
 obviously, there is no veth3003 device found, so the error was returned.

 what 'ip a l' commands shows when this happens?

 Kirill

 Alexander Prinsier wrote:
 Hello,

 I'm seeing these error messages appear in my error log:

 Feb 10 14:37:04 h01 snmpd[18514]: ioctl 35123 returned -1

 I tried to get the cause of this error message:

 strace /usr/sbin/snmpd -Le -f -u snmp -I -smux -p /var/run/snmpd.pid
 127.0.0.1  /root/snmpd.log 21

 I could find this in the strace.log:

 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 10
 ioctl(10, SIOCGIFINDEX, {ifr_name=eth0, ifr_index=6}) = 0
 close(10)   = 0
 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 10
 ioctl(10, SIOCGIFINDEX, {ifr_name=veth3003, ???}) = -1 ENODEV (No such
 device)
 write(2, ioctl 35123 returned -1\n, 24ioctl 35123 returned -1
 ) = 24
 close(10)   = 0

 veth3003 is associated with VE 3003, in which runs openvpn (don't think
 that's relevant though).

 Anyone knows what this ioctl is for, and why it fails for veth3003?

 Thanks,

 Alexander
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users

 
 
___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] problem with snmpd on veth

2008-02-11 Thread Kirill Korotaev
This ioctl() returns the ifindex for a given device name.
Where do you run this snmpd/strace? On the host or in a VE?
Obviously, there is no veth3003 device found, so the error was returned.

What does the 'ip a l' command show when this happens?

Kirill

Alexander Prinsier wrote:
 Hello,
 
 I'm seeing these error messages appear in my error log:
 
 Feb 10 14:37:04 h01 snmpd[18514]: ioctl 35123 returned -1
 
 I tried to get the cause of this error message:
 
 strace /usr/sbin/snmpd -Le -f -u snmp -I -smux -p /var/run/snmpd.pid
 127.0.0.1  /root/snmpd.log 21
 
 I could find this in the strace.log:
 
 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 10
 ioctl(10, SIOCGIFINDEX, {ifr_name=eth0, ifr_index=6}) = 0
 close(10)   = 0
 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 10
 ioctl(10, SIOCGIFINDEX, {ifr_name=veth3003, ???}) = -1 ENODEV (No such
 device)
 write(2, ioctl 35123 returned -1\n, 24ioctl 35123 returned -1
 ) = 24
 close(10)   = 0
 
 veth3003 is associated with VE 3003, in which runs openvpn (don't think
 that's relevant though).
 
 Anyone knows what this ioctl is for, and why it fails for veth3003?
 
 Thanks,
 
 Alexander
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 
___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: AW: AW: AW: [Users] openvz and qemu

2008-02-11 Thread Kirill Korotaev
care to create a patch?


Dietmar Maurer wrote:
 node should be deleted normally only when no processes left.
 it's possible to fix syscall to return an error when node is 
 non-empty...
 
 Ok, will try make sure that no processes left. But I thing the syscall
 should be fixed too.
 
 - Dietmar
 
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 
___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: AW: AW: [Users] openvz and qemu

2008-02-09 Thread Kirill Korotaev
Normally the node should be deleted only when no processes are left.
It's possible to fix the syscall to return an error when the node is non-empty...


Dietmar Maurer wrote:
  
 check vzctl sources. it calls other OVZ-specific syscalls by 
 their numbers, like sys_setluid, sys_setublimit etc.

 For creating fairsched node you'll need to call 
 fairsched_mknod first, then move the process to this node 
 using fairsched_mvpr
 
 OK, I wrote a small (perl) script to test those syscalls, and everything
 work as expected, for example:
 
 ..
 fairsched_mknod (0, 500, 400);
 fairsched_vcpus (400, 1);
 set_cpulimit (400, 25);
 fairsched_mvpr ($$, 400);
 fork_and_do_something(); # runs with 25% cpu
 wait
 fairsched_rmnod (400);
 ..
 
 work perfectly.
 
 But when I call fairsched_rmnod(400) from another process while there is
 still a process running inside 400 the system freezes. I get a kernel
 panic somewhere inside move_task_off_dead_cpu.
 
 any idea how to fix that?
 
 - Dietmar
 
 
 
 
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 
___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Problem with TCP window too large for TCPRCVBUF still present

2008-02-05 Thread Kirill Korotaev
try this one attached plz.

Marcin Owsiany wrote:
 Hi Vitaliy,
 
 [ Cc'ing to Kirill, who sent the original patch ]
 
 On Tue, Jan 08, 2008 at 11:12:49AM +0300, Vitaliy Gusev wrote:
 On 11 October 2007 21:46:05 Marcin Owsiany wrote:
 Trying to debug a problem with stalling connections I found this thread:
 http://forum.openvz.org/index.php?t=msggoto=9018
 which describes the exact problems I'm still having with 2.6.18-028.18

 Has it been fixed in this version? Or do you still need someone to test
 this patch?
 Yes, this patch still requires testing.
 
 Can I have the patch as an attachment? The forum mangled the formatting
 and patch refuses to apply it..
 
--- ./diff.ve   2008-02-05 11:25:37.0 +0300
+++ ./diff  2008-02-05 11:32:56.0 +0300
@@ -1,131 +0,0 @@
-diff --git a/include/net/tcp.h b/include/net/tcp.h
-index 0ff49a5..7e8f200 100644
 a/include/net/tcp.h
-+++ b/include/net/tcp.h
-@@ -1815,8 +1815,9 @@ static inline int tcp_win_from_space(int
-/* Note: caller must be prepared to deal with negative returns */
-static inline int tcp_space(const struct sock *sk)
-{
-- return tcp_win_from_space(sk-sk_rcvbuf -
-- atomic_read(sk-sk_rmem_alloc));
-+ int ub_tcp_rcvbuf = (int) sock_bc(sk)-ub_tcp_rcvbuf;
-+ return tcp_win_from_space(min(sk-sk_rcvbuf, ub_tcp_rcvbuf)
-+ - atomic_read(sk-sk_rmem_alloc));
-}
-
-static inline int tcp_full_space(const struct sock *sk)
-diff --git a/include/ub/beancounter.h b/include/ub/beancounter.h
-index 3d87afa..fc236e8 100644
 a/include/ub/beancounter.h
-+++ b/include/ub/beancounter.h
-@@ -144,6 +144,8 @@ struct sock_private {
-unsigned long ubp_rmem_thres;
-unsigned long ubp_wmem_pressure;
-unsigned long ubp_maxadvmss;
-+ /* Total size of all advertised receive windows for all tcp sockets */
-+ unsigned long ubp_rcv_wnd;
-unsigned long ubp_rmem_pressure;
-#define UB_RMEM_EXPAND 0
-#define UB_RMEM_KEEP 1
-@@ -177,6 +179,7 @@ #define ub_held_pages ppriv.ubp_held_pa
-struct sock_private spriv;
-#define ub_rmem_thres spriv.ubp_rmem_thres
-#define ub_maxadvmss spriv.ubp_maxadvmss
-+#define ub_rcv_wnd spriv.ubp_rcv_wnd
-#define ub_rmem_pressure spriv.ubp_rmem_pressure
-#define ub_wmem_pressure spriv.ubp_wmem_pressure
-#define ub_tcp_sk_list spriv.ubp_tcp_socks
-diff --git a/include/ub/ub_sk.h b/include/ub/ub_sk.h
-index e65c9ed..02d0137 100644
 a/include/ub/ub_sk.h
-+++ b/include/ub/ub_sk.h
-@@ -34,6 +34,8 @@ struct sock_beancounter {
-*/
-unsigned long poll_reserv;
-unsigned long forw_space;
-+ unsigned long ub_tcp_rcvbuf;
-+ unsigned long ub_rcv_wnd_old;
-/* fields below are protected by bc spinlock */
-unsigned long ub_waitspc; /* space waiting for */
-unsigned long ub_wcharged;
-diff --git a/kernel/ub/ub_net.c b/kernel/ub/ub_net.c
-index 74d651a..afee710 100644
 a/kernel/ub/ub_net.c
-+++ b/kernel/ub/ub_net.c
-@@ -420,6 +420,7 @@ static int __sock_charge(struct sock *sk
-
-added_reserv = 0;
-added_forw = 0;
-+ skbc-ub_rcv_wnd_old = 0;
-if (res == UB_NUMTCPSOCK) {
-added_reserv = skb_charge_size(MAX_TCP_HEADER +
-1500 - sizeof(struct iphdr) -
-@@ -439,6 +440,7 @@ static int __sock_charge(struct sock *sk
-added_forw = 0;
-}
-skbc-forw_space = added_forw;
-+ skbc-ub_tcp_rcvbuf = added_forw + SK_STREAM_MEM_QUANTUM;
-}
-spin_unlock_irqrestore(ub-ub_lock, flags);
-
-@@ -528,6 +530,7 @@ void ub_sock_uncharge(struct sock *sk)
-skbc-ub_wcharged, skbc-ub, skbc-ub-ub_uid);
-skbc-poll_reserv = 0;
-skbc-forw_space = 0;
-+ ub-ub_rcv_wnd -= is_tcp_sock ? tcp_sk(sk)-rcv_wnd : 0;
-spin_unlock_irqrestore(ub-ub_lock, flags);
-
-uncharge_beancounter_notop(skbc-ub,
-@@ -768,6 +771,44 @@ static void ub_sockrcvbuf_uncharge(struc
-* UB_TCPRCVBUF
-*/
-
-+/*
-+ * UBC TCP window management mechanism.
-+ * Every socket may consume no more than sock_quantum.
-+ * sock_quantum depends on space available and 
ub_parms[UB_NUMTCPSOCK].held.
-+ */
-+static void ub_sock_tcp_update_rcvbuf(struct user_beancounter *ub,
-   + struct sock *sk)
-+{
-+ unsigned long allowed;
-+ unsigned long reserved;
-+ unsigned long available;
-+ unsigned long sock_quantum;
-+ struct tcp_opt *tp = tcp_sk(sk);
-+ struct sock_beancounter *skbc;
-+ skbc = sock_bc(sk);
-+
-+ if( ub-ub_parms[UB_NUMTCPSOCK].limit * ub-ub_maxadvmss
-   +  ub-ub_parms[UB_TCPRCVBUF].limit) {
-   + /* this is defenitly shouldn't happend */
-   + return;
-   + }
-   + allowed = ub-ub_parms[UB_TCPRCVBUF].barrier;
-   + ub-ub_rcv_wnd += (tp-rcv_wnd - skbc-ub_rcv_wnd_old);
-   + skbc-ub_rcv_wnd_old = tp-rcv_wnd;
-   + reserved = ub-ub_parms[UB_TCPRCVBUF].held + ub-ub_rcv_wnd;
-   + available = (allowed  reserved)?
-   + 0:allowed - reserved;
-   + sock_quantum = max(allowed / ub-ub_parms[UB_NUMTCPSOCK].held,
-   + ub-ub_maxadvmss);
-   + if ( skbc-ub_tcp_rcvbuf  sock_quantum) {
-   + skbc-ub_tcp_rcvbuf = sock_quantum;
-   + } else {
-   + skbc-ub_tcp_rcvbuf += min(sock_quantum - 

Re: AW: [Users] openvz and qemu

2008-01-31 Thread Kirill Korotaev
Check the vzctl sources. It calls other OVZ-specific syscalls by their numbers,
like sys_setluid, sys_setublimit etc.

To create a fairsched node you'll need to call fairsched_mknod first, then move
the process into this node using fairsched_mvpr.
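
A rough way to locate the numbers (the paths below are just examples and depend on 
where your OpenVZ kernel and vzctl sources are unpacked):

  # the syscall numbers are architecture specific
  grep -rn fairsched /usr/src/linux-2.6.18-openvz/include/asm-*/unistd*.h
  # and see how vzctl wraps them
  grep -rn fairsched /usr/src/vzctl-3.0.22/src/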
 

Dietmar Maurer wrote:
 OK, basically found the syscall interface in kernel/fairsched.c
 
 But how do I find out the syscall numbers, i.e how do I 
 call sys_fairsched_mknod() with userspace syscall()
 
 - Dietmar
 
 -----Original Message-----
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On behalf of Dietmar Maurer
 Sent: Friday, 1 February 2008 07:05
 To: users@openvz.org
 Subject: [Users] openvz and qemu

 Hi all,

 i want to run a few qemu VMs inside VE0. Just wonder how to 
 set fairsched properties for a single process?

  - howto create a new faisched group (for the qemu process)?
  - howto assign weight/rate

 or is that not possible at all?

 
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 
___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Split VE disk space between / and /var?

2008-01-30 Thread Kirill Korotaev
It's possible to do the following:
- create some directory, say /vz/private/ID.2
- turn on vzquota for this directory (the quota ID may be the VEID plus some big 
number, say 100, so that it differs from any real VEID)
- bind (u)mount /vz/private/ID.2 to /vz/root/ID/var in the VE.(u)mount scripts.
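
A minimal sketch of the mount script (the "ID" placeholders, paths and quota ID 100 
are illustrative; see vzquota(8) for the exact arguments):

  #!/bin/bash
  # /etc/vz/conf/ID.mount
  vzquota on 100
  mount --bind /vz/private/ID.2 /vz/root/ID/var

The matching ID.umount script would umount /vz/root/ID/var and run "vzquota off 100".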


Jan Tomasek wrote:
 Hi,
 
 I was asked for VE with disk space splitup this way:
 
 /using 3GB
 /varusing 12GB
 
 is this posible with OpenVZ? Only way I figured out is to create special
 partion on HW node with required size and allow access to device for VE,
 or mount it outside and use mount -o bind. Both ways are not very nice...
 
 I'm asking to be sure there is no simpler way how achieve such split.
 
 Thanks
___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Split VE disk space between / and /var?

2008-01-30 Thread Kirill Korotaev


Jan Tomasek wrote:
 Kirill Korotaev wrote:
 it's possible to do the following:
 - create some directory, say /vz/private/ID.2
 - turn on vzquota on this directory (quota ID maybe VEID+some bug number, 
 say 100)
 - bind (u)mount /vz/private/ID.2 to /vz/root/ID/var in VE.(u)mount scripts.
 
 That is quite interesting thanks! But I need bit more help.
 
 Start of VE is fine:
 
 chlivek:~# vzctl start 192003
 Starting VE ...
 + mount --bind /staj/vz/private/192003.mnt /staj/vz/root/192003/mnt
 VE is mounted
 Adding IP address(es): 192.168.0.3
 Setting CPU units: 1000
 Configure meminfo: 65536
 Set hostname: dns
 VE start in progress...
 
 But stop not:
 
 chlivek:~# vzctl enter 192003
 entered into VE 192003
 [EMAIL PROTECTED]:/# ls /mnt
 big  x  xx
 
 chlivek:~# vzctl stop 192003
 Stopping VE ...
 VE was stopped
 + ls -l /staj/vz/root/192003/mnt
 total 0
 + umount /staj/vz/root/192003/mnt
 umount: /staj/vz/private/192003.mnt: not mounted
 umount: /staj/vz/private/192003.mnt: not mounted

That's because the VE's distro-specific init scripts umount everything on stop,
so you can ignore this warning (but leave the umount in place just in case).

 + exit 0
 VE is unmounted
 
 Mount point /staj/vz/root/192003/mnt doesn't exist at moment of calling.
 But when I do not call umout it hangs in system mount tab.

Check the -n option to mount.
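
I.e. something along these lines in the mount script (the path is taken from your 
output above):

  # -n keeps the bind mount out of /etc/mtab on the host
  mount -n --bind /staj/vz/private/192003.mnt /staj/vz/root/192003/mnt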

 
 I'm also interested if it's posible to have inside VE df showing correct
 info, instead of total HW node capacity.
 
 [EMAIL PROTECTED]:/# df -h
 FilesystemSize  Used Avail Use% Mounted on
 simfs 1.0G  370M  655M  37% /
 /dev2/root2   1.1T   46G 1020G   5% /mnt
 tmpfs 4.0G 0  4.0G   0% /lib/init/rw
 tmpfs 4.0G 0  4.0G   0% /dev/shm

Have you run "vzquota on" for this directory?
Please check the vzquota init, on and off commands.
You will need to assign some numerical ID to this quota directory (it should be 
different from the VEID),
set up vzquota for it and simply turn it on before the bind mount.
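
A rough sketch (the quota ID 100 and the limits are purely illustrative - see 
vzquota(8) for the exact options):

  # initialize quota for the extra area; limits are in 1K blocks and inodes
  vzquota init 100 -p /staj/vz/private/192003.mnt \
          -b 12000000 -B 12600000 -i 400000 -I 440000
  vzquota on 100       # before the bind mount (in the VE.mount script)
  vzquota off 100      # after the umount (in the VE.umount script)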

Kirill

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] OpenVZ, Open ISCSI OpenSolaris

2008-01-23 Thread Kirill Korotaev
Mart,

1. OpenVZ doesn't interfere with storage, and we run some services locally
   using RHEL5 (2.6.18) + iSCSI internally w/o any problems.
2. If I'm correct, you are using a way too old OVZ kernel. Please update.
3. Have you used the iSCSI packages from Debian etch, or did you compile/install
   anything yourself?

Kirill


Mart van Santen wrote:
 Hello,
 
 On one of our servers we are running OpenVZ for some time without any
 problems. But yesterday I started a virtual machine based on a
 iscsi-target, this target is running on opensolaris. After a few hours
 without problems, the system totally collapsed by the cause of some
 ip-conflicts. The machine running openvz was claiming IP-adres space, or
 sending ip-frames with the same source ip as the storage server, and
 because of this, the storage system disabled the network interfaces now
 and then. I don't think this is caused by a bug in openvz, but because
 we are running openiscsi in combination with opensolaris for some time,
 i just want to know if it is possible that openvz interacts with the
 storage/openiscsi layer and some error can raise because of that.
 
 The details:
 
 iscsi-target machine:
 intel 64-bit OpenSolaris/SunOS 5.10
 Target is on a zfs volume
 
 open-iscsi:
 version: 2.0-868-test1
 This version because i had some other problems with previous versions
 
 openVZ machine:
 
 kernel: 2.6.18-1-openvz
 patch: 028.18
 platform: intel 64-bit
 os: debian etch
 
 extra sysctl.conf settings on openvz machine:
 net.ipv4.tcp_max_tw_kmem_fraction=384
 net.ipv4.tcp_max_tw_buckets_ub=16536
 
 
 The problems:
 
 During the night suddenly ping to our storage server dropped several
 times. This is caused by sunos, because it discovers that an other
 machine is using the same ip, and than disables the network interface
 for some time and after some timout then tries to recover the
 IP/interface. The mac-addresses in the SunOS log matches with the
 hardware addresses of the openvz machine.
 
 On the OpenVZ machine:
 
 Around the same time, the errors at the bottom of this email occurred.
 If the timing of all logfiles is correct, it looks like that these
 errors occurred first, and then the problems with the IP's occurred. I
 wonder if this has to do anything with the interface stack of openvz
 conflicting with the lowlevel access of the interface by open-iscsi.
 Maybe the IP-stack mirrored packets from the solaris machine or
 something like that. On the machine without openvz but nearly the same
 kernel and the same iscsi stack we didn't had any of these problems.
 
 I hope anyone has a good hint. Are other people using iscsi targets for
 storage and is this done on the same interface as the external network
 etc. Are there any ip/tcp system settings where I have to take care of...
 
 
 Kind regards,
 
 
 Mart van Santen
 
 
 
 Jan 23 08:30:41 krypton kernel:  session0: iscsi: session recovery timed
 out after 120 secs
 Jan 23 08:30:41 krypton kernel: iscsi: cmd 0x28 is not queued (7)
 Jan 23 08:30:41 krypton kernel: sd 1:0:0:0: SCSI error: return code =
 0x0001
 Jan 23 08:30:41 krypton kernel: end_request: I/O error, dev sdb, sector
 4969152
 Jan 23 08:30:41 krypton kernel: iscsi: cmd 0x28 is not queued (7)
 Jan 23 08:30:41 krypton kernel: sd 1:0:0:0: SCSI error: return code =
 0x0001
 Jan 23 08:30:41 krypton kernel: end_request: I/O error, dev sdb, sector
 4969152
 Jan 23 08:30:41 krypton kernel: iscsi: cmd 0x2a is not queued (7)
 Jan 23 08:30:41 krypton last message repeated 6 times
 Jan 23 08:30:41 krypton kernel: sd 1:0:0:0: SCSI error: return code =
 0x0001
 Jan 23 08:30:41 krypton kernel: end_request: I/O error, dev sdb, sector
 37901744
 Jan 23 08:30:41 krypton kernel: sd 1:0:0:0: SCSI error: return code =
 0x0001
 Jan 23 08:30:41 krypton kernel: end_request: I/O error, dev sdb, sector
 38917752
 Jan 23 08:30:41 krypton kernel: sd 1:0:0:0: SCSI error: return code =
 0x0001
 Jan 23 08:30:41 krypton kernel: end_request: I/O error, dev sdb, sector
 38918128
 Jan 23 08:30:41 krypton kernel: sd 1:0:0:0: SCSI error: return code =
 0x0001
 Jan 23 08:30:41 krypton kernel: end_request: I/O error, dev sdb, sector
 3543176
 Jan 23 08:30:41 krypton kernel: sd 1:0:0:0: SCSI error: return code =
 0x0001
 Jan 23 08:30:41 krypton kernel: printk: 18 messages suppressed.
 Jan 23 08:30:41 krypton kernel: Buffer I/O error on device sdb1, logical
 block 4864715
 Jan 23 08:30:41 krypton kernel: lost page write due to I/O error on sdb1
 Jan 23 08:30:41 krypton kernel: Buffer I/O error on device sdb1, logical
 block 4864762
 Jan 23 08:30:41 krypton kernel: lost page write due to I/O error on sdb1
 Jan 23 08:30:41 krypton kernel: end_request: I/O error, dev sdb, sector
 36712096
 Jan 23 08:30:41 krypton kernel: sd 1:0:0:0: SCSI error: return code =
 0x0001
 Jan 23 08:30:41 krypton kernel: end_request: I/O error, dev sdb, sector
 4676184
 Jan 23 08:30:41 krypton kernel: sd 1:0:0:0: SCSI error: return code =
 0x0001
 Jan 23 08:30:41 krypton 

Re: [Users] Live Migration Fails

2007-12-24 Thread Kirill Korotaev
Can you post the output of dmesg here plz?
Looks like your installation is somehow broken.
I can hardly imagine why sed should normally catch SIGSEGV.
Maybe it is compiled for some more modern CPU than you use on the destination 
machine?
Or vice versa - maybe you use an ancient distribution in the VE?

So plz provide more info: what OS distribution is running inside the VE,
the dmesg output, and what other errors you experienced during online migration
after fixing vzmigrate as Thorsten suggested?

Kirill


Pablo L. Arturi wrote:
Pablo L. Arturi wrote:

Hello guys, does anyone knows why while trying to migrate a VE I get

this

error?

[EMAIL PROTECTED] ~]# vzmigrate -r no --online --keep-dst -v 10.0.10.251

111

OPT:-r
OPT:--online
OPT:--keep-dst
OPT:-v
OPT:10.0.10.251
Starting online migration of VE 111 on 10.0.10.251
OpenVZ is running...
Loading /etc/vz/vz.conf and /etc/vz/conf/111.conf files
Check IPs on destination node: 190.2.55.204 10.0.10.204
Preparing remote node
Copying config file
111.conf  100%  882 0.9KB/s

00:00

Saved parameters for VE 111
/usr/sbin/vzmigrate: line 382: [: missing `]'
Creating remote VE root dir
Creating remote VE private dir
VZ disk quota disabled -- skipping quota migration
Syncing private
Live migrating VE
Suspending VE
Setting up checkpoint...
suspend...
get context...
Checkpointing completed succesfully
Dumping VE
Setting up checkpoint...
join context..
dump...
Can not dump VE: Invalid argument
iptables-save exited with 255
Checkpointing failed
Error:  Failed to dump VE
Resuming...
Running: /usr/lib/vzctl/scripts/vps-net_add
put context
The migration is from [EMAIL PROTECTED] to [EMAIL PROTECTED]

This is HWN configurations:

[EMAIL PROTECTED] ~]# uname -a
Linux localhost.localdomain 2.6.18-ovz028stab035.1-smp #1 SMP Sat Jun
 
 9
 
12:15:32 MSD 2007 i686 i686 i386 GNU/Linux


[EMAIL PROTECTED] ~]# rpm -qa | grep vz
vzrpm44-4.4.1-22.5
vzrpm43-python-4.3.3-7_nonptl.6
vzyum-2.4.0-11
vzpkg-2.7.0-18
vztmpl-fedora-core-3-2.0-2
vztmpl-fedora-core-5-2.0-2
vzctl-3.0.22-1
ovzkernel-smp-2.6.9-023stab032.1
ovzkernel-smp-2.6.16-026test020.1
vzrpm44-python-4.4.1-22.5
vzrpm43-4.3.3-7_nonptl.6
vztmpl-centos-4-2.0-2
vztmpl-fedora-core-4-2.0-2
kernel-smp-2.6.18-ovz028stab035.1
vzctl-lib-3.0.22-1
vzquota-3.0.11-1

[EMAIL PROTECTED] ~]# uname -a
Linux ovz98.dnsba.com 2.6.18-53.el5.028stab051.1 #1 SMP Fri Nov 30

03:05:22

MSK 2007 i686 athlon i386 GNU/Linux

[EMAIL PROTECTED] ~]# rpm -qa | grep vz
ovzkernel-2.6.18-8.1.4.el5.028stab035.1
vzrpm44-4.4.1-22.5
vzrpm43-python-4.3.3-7_nonptl.6
vzyum-2.4.0-11
vztmpl-centos-4-2.0-2
vztmpl-fedora-core-4-2.0-2
vzctl-3.0.22-1
ovzkernel-2.6.18-53.el5.028stab051.1
vzrpm44-python-4.4.1-22.5
vzrpm43-4.3.3-7_nonptl.6
vzpkg-2.7.0-18
vztmpl-fedora-core-3-2.0-2
vztmpl-fedora-core-5-2.0-2
vzctl-lib-3.0.22-1
vzquota-3.0.11-1



Any idea?


Hi Pablo,

yep, since the last git commit to vzmigrate (-

 http://git.openvz.org/?p=vzctl;a=commitdiff;h=ebd5fb00a4eb0134d7ef4ebfdc0b6ae43d07d8fd
 
), the extended test command was removed, so change at line 382:

if [ $? != 20 && $? != 21 && $? != 0 ]; then

to:
if [[ $? != 20 && $? != 21 && $? != 0 ]]; then

Bye,
  Thorsten

Hello Thorsten, thanks for your reply.

I changed line 382 and the '/usr/sbin/vzmigrate: line 382: [: missing
`]'' error disappeared. But I keep getting the other errors, which don't let the
VE be migrated.
Any other idea?

Thank you,
Pablo
 
 
 I have tried the offline migration (excluding --online) from the command
 line, and the VE was migrated. But when I try to start it I get the
 following error:
 
 [EMAIL PROTECTED] 111]# vzctl restart 301
 Restarting VE
 Starting VE ...
 VE is mounted
 Adding IP address(es): NNN.NNN.NNN.NNN
 bash: line 61: 29723 Segmentation fault  /bin/sed -e
 s|^$name=.*|$name=\$value\| ${file} ${file}.$$
  ERROR: Can't change file /etc/sysconfig/network
 Setting CPU units: 41562
 Configure meminfo: 145261
 Set hostname: opertime.dnsba.com
 bash: line 61: 29740 Segmentation fault  /bin/sed -e
 s|^$name=.*|$name=\$value\| ${file} ${file}.$$
  ERROR: Can't change file /etc/sysconfig/network
 VE start in progress...
 [EMAIL PROTECTED] 111]# vzlist -a
 
 I tried also rsync-ing the entire private area with rsync -avz, but the
 same happens.
 
 Any idea on this?
 
 THank you,
 Pablo
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] 64-bit host, 32-bit guests and iptables

2007-12-20 Thread Kirill Korotaev
Cliff Wells wrote:
 On Wed, 2007-12-19 at 20:46 +0300, Kirill Korotaev wrote:
 
Cliff, moreover, we are dropping support for 2.6.22 and want to develop 2.6.24 until it is
really stable (for the Ubuntu release).
 
 
 So does this mean 2.6.24 is slated as the replacement for 2.6.18?

no. 2.6.18 will live very very long, as long as RHEL5 will live.
(at least 4-5 years AFAIR)

The same way 2.6.9-RHEL4 is also alive and supported and is not going to die 
yet.

2.6.24 is developed for the Ubuntu Long Term Support Server
and won't have new OVZ features compared to 2.6.18.
I.e. we are currently very much committed to 2.6.18-RHEL5
and will add features there (even to the stable branch) as long as needed.

So we will do our best to handle your bug reports ASAP.
 
 Sounds like a plan.  I'll upgrade my kernel later today.  

make sure you take it from git branch 2.6.24-openvz in that repo.

Thanks,
Kirill

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] 64-bit host, 32-bit guests and iptables

2007-12-19 Thread Kirill Korotaev
Cliff, moreover, we are dropping support for 2.6.22 and want to develop 2.6.24 until it is
really stable (for the Ubuntu release).
So we will do our best to handle your bug reports ASAP.

Thanks,
Kirill


Cliff Wells wrote:
 On Wed, 2007-12-19 at 12:17 +0300, Kirill Korotaev wrote:
 
AFAIK, yes, issues are still there. Situation should be better in 2.6.24
(can be found in git). If you have time to check/test it, we would be very 
thankful.
 
 
 We are both in luck =)  The server isn't in production yet but has about
 10 VE's setup.  I'll give 2.6.24 a try and report back.
 
 Thanks,
 Cliff
 
 
Cliff Wells wrote:

I've heard that there are issues with running iptables commands from a
32-bit guest on a 64-bit host, but I've also seen at least one patch[1]
submitted to help resolve this issue.

I'm running 2.6.22-ovz005 and vzctl from git.  Is this still an issue in
this version?  Obviously I can setup iptables on the host, but it would
be nice to allow the guest to control their own firewall rules.

Regards,
Cliff 


[1] http://forum.openvz.org/index.php?t=msggoto=1687;


___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Cloning and permissions

2007-12-17 Thread Kirill Korotaev
Peter,

It depends on what you mean by cloning. What exact commands/operations
did you run?

BTW, do you mean OpenVZ or Virtuozzo?

Thanks,
Kirill



Peter Machell wrote:
 After cloning a Debian host, I found everything working except MySQL.
 
 I had to chown its binaries, databases and log folder back to mysql  
 from root.
 
 Is this normal and should I expect other permissions to have changed?  
 When copying the VZ does the ownership of files take on whatever  
 matches the UID on the host system?
 
 regards,
 Peter.
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] VE fails to stop

2007-12-11 Thread Kirill Korotaev
Ubuntu 7.10 requires the latest vzctl (check http://git.openvz.org to compile 
it);
otherwise init busy-loops on VE start (plz check it),
so it can hang and not stop.
(A `kill -KILL VE-init-pid` from the host system will still kill the whole VE
very quickly :@) )

Kirill


Cliff Wells wrote:
 Hardware: dual Opteron 242
 Kernel: linux-2.6.18-openvz-028.049
 Host OS: Gentoo
 Guest OS: ubuntu-7.10-i386-minimal
 
 Both host and VE are fresh installs.
 
 I noticed that vzctl stop 101 would fail with
 
 Stopping VPS ...
 Unable to stop VPS, operation timed out
 
 
 This of course also prevented me from properly shutting down the system
 (power off required).
 
 I found a post [1] that seemed related, which led to investigating halt
 commands:
 
 vps1 ~ # vzctl enter 101
 [EMAIL PROTECTED]:/# halt -p
 shutdown: Unable to send message: Connection refused
 [EMAIL PROTECTED]:/# halt
 shutdown: Unable to send message: Connection refused 
 [EMAIL PROTECTED]:/dev# halt -fp
 got signal 9
 exited from VE 101
 vps1 ~ #
 
 
 Does this appear to be a problem with OVZ or with the guest template?
 
 
 
 [1] http://forum.openvz.org/index.php?t=msggoto=6354;
 
 
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] OpenVZ specific patchset

2007-12-11 Thread Kirill Korotaev
DULMANDAKH Sukhbaatar wrote:
the goal is to have it stable for the Ubuntu LTS release, so it's around 
February 2008.
 
 
 it would be great. Someone is already working on it. You can see it
 from https://wiki.ubuntu.com/ServerTeam/HardyIdeaPool .

thanks! will definitely contact him!

Kirill

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: AW: [Users] init output

2007-12-11 Thread Kirill Korotaev
or you can replace /sbin/init with a small script that adds this variable to the
environment and then runs the original init binary.
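
For illustration, a minimal wrapper sketch (it assumes the original binary has
been moved aside to /sbin/init.real inside the VE; the path and log file name
are just examples):

#!/bin/sh
# send VE init output to a log file instead of the console
CONSOLE=/var/log/init.log
export CONSOLE
exec /sbin/init.real "$@"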

Kirill

Dietmar Maurer wrote:
 But I cant do that with current vzctl, instead I need to modify the
 source?
 
 env.c 301: char *envp[] = {HOME=/, TERM=linux,
 CONSOLE=/var/log/init.log, NULL}; 
 
 Is that the way to do it?
 
 - Dietmar
 
 
you can run the VE's /sbin/init with the environment variable CONSOLE 
set to some file;
in this case its output will go into that file.
# man init
 
 
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] perl LOCALE issue- and solution

2007-12-10 Thread Kirill Korotaev
was it a -minimal template?
Some of the templates AFAIK have locales removed,
since locales take up quite a lot of space (~20MB) while not being needed in most cases
(except for the default C one).

Thanks,
Kirill


Michael Klatsky wrote:
 Hello all-
 
 I ran into a puzzling issue and found a solution- but I am wondering
 what the root cause really was, and whether others have run into this:
 
 After create a VE using the repo provided centos-4-i386-default
 template, I entered the VE via ssh. When running perl (any perl
 script), I got the message:
 
 perl: warning: Setting locale failed.
 perl: warning: Please check that your locale settings:
 LANGUAGE = en_US:en,
 LC_ALL = (unset),
 LANG = en_US
 are supported and installed on your system.
 perl: warning: Falling back to the standard locale (C)
 
 After doing a bit of hunting on methods to set this, including these pages:
 http://www.in-ulm.de/~mascheck/locale/#short
 http://perldoc.perl.org/perllocale.html#Permanently-fixing-your-system's-locale-configuration
 
 I started looking closely at glibc-common, as when I did locale -a I
 got the message that locale directories could not be found.
 
 I checked, and indeed- rpm -q glibc-common reported that the package
 was installed. However, after checking some of the files included that
 should have existed, I found that the local dirs were not there
 (example: /usr/lib/locale/en_US/LC_TIME). So, I grabbed the
 glibc-common rpm and did a rpm -ivh --force,  and voila- all was
 properly installed.
 
 The purpose of my post is to document this for others who may have run
 into this, and t solicit any theories as to why that package was
 phantomly installed. Significantly, other than the locale issue- the
 system was operating properly.
 
 Thanks- and so far quite impressed
 
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Help disk qouta error

2007-12-10 Thread Kirill Korotaev
Gregor Mosheh wrote:
 Info Ishaak wrote:
 
Starting VPS ...
vzquota : (error) Quota on syscall for 101: File exists
vzquota on failed [3]
 
 
 Try stopping the VPS and dropping the quota file, then restarting both:
vzctl stop 101
vzquota drop 101
vzquota on 101

So, you turned quota on and ...

vzctl start 101

... and on VE start vzctl tells you that quota is already on.

Did you want to recalculate the quota via drop?
Then just drop it, and vzctl start will recalculate automatically;
there is no need for the vzquota on command.
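
I.e., with the container ID from this thread, the intended sequence is just:

vzctl stop 101
vzquota drop 101
vzctl start 101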

Thanks,
Kirill

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] OpenVZ specific patchset

2007-12-10 Thread Kirill Korotaev
 Does 2.6.24-ovz use the namespace code parts merged in 2.6.24?

Sure. We even backport these patches back to our branch to use and test it.

 Is there any target for an OpenVZ merge into mainline (or at least a
 merge of the network parts)?

I think it's hard to predict exactly...

Our current plan is to have IPv4/routing/rules virtualization in 1-2 months;
IPv6 should go faster after that (~1 month).
Then vlan, tun/tap, netfilters and other features.

Thanks,
Kirill

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] OpenVZ specific patchset

2007-12-10 Thread Kirill Korotaev
DULMANDAKH Sukhbaatar wrote:
 On Dec 11, 2007 1:09 AM, Kirill Korotaev [EMAIL PROTECTED] wrote:
 
Does 2.6.24-ovz use the namespace code parts merged in 2.6.24?

Sure. We even backport these patches back to our branch to use and test it.
 
 
 What will happen with 2.6.22? Will it become stable branch or will die
 like 2.6.20?

will die.

Is there any target for an OpenVZ merge into mainline (or at least a
merge of the network parts)?

I think it's hard to predict exactly...

Our current plan is to have IPv4/routing/rules virtualization in 1-2 months;
IPv6 should go faster after that (~1 month).
Then vlan, tun/tap, netfilters and other features.
 
 
From Kir's mail we assume that 2.6.24 soon will be released and marked
 as 'development'. And is there any plan about when it will be stable
 or something like that?

the goal is to have it stable for the Ubuntu LTS release, so it's around February 
2008.

Thanks,
Kirill

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] recommended swap space

2007-12-07 Thread Kirill Korotaev
I wouldn't agree with you that easily.
Usually an application's active data set (i.e. the data which is frequently accessed)
is ~10-50% of the whole RSS.
Thus most of an app's memory can be swapped out w/o much performance penalty.

I've just checked a couple of production nodes I have access to and see for
example following numbers:
8GB RAM, 2GB swap total, 33 big VEs = 1.2GB swap used
8GB RAM, 16GB swap total, 125 VEs   = ~8GB swap used
both machines work fine and statistics show normal latencies.

I would express it as:
1. swap can hardly be used for memory-hungry applications like
   math calculations and similar stuff doing aggressive memory access
2. for the common kind of applications swap allows increasing the effective memory
   size quite significantly (compared to the RAM size) and it also allows
   overcommitting memory, i.e. having a sum of VE limits
   bigger than the total RAM. If some of your VEs have a burst in memory
   consumption,
   you always know that SWAP will help to handle it.

So depending on this one can select the swap size (a short worked example follows):
a) If you don't know your workload and plan to overcommit your machine,
   a swap size of 1-2x RAM is a good option.
b) If on the other hand you know that your apps will never be able
   to consume the whole RAM, then swap can be minimal.
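
For example, taking the 8GB nodes above: with overcommit and an unknown workload,
rule (a) suggests roughly 8-16GB of swap; for a node whose VEs are known to fit
into RAM, 1-2GB per rule (b) is already enough.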

Thanks,
Kirill


Dariush Pietrzak wrote:
documentation says swap should be pyhsical RAM*2.
 
  This rule was created when HDDs were many times faster compared to RAM than
 they are today (and when programs needed way more virtual space in relation
 to what could be available).
  Imagine how long it would take to read/write 32G from HDD...; also, most
 really large requirements for RAM come from various layers of essentially
 caching. In the 90s it was quite typical to run servers with half of the
 virtual space permanently swapped out (a 64M RAM machine with 128M swap
 and never less than 64M swap used, a 512M machine with 1G swap and never less
 than 512M of swap used, etc.). 
  It was possible to do that because of the large amount of inactive or very
 rarely called code in programs; you could safely swap out half of the
 code and assume that it won't ever be needed. 
  These days, most of the RAM goes to data, not to code, and a lot of stuff works
 like hash tables - every single page of data is accessed relatively
 infrequently (thus, it would be swapped out), but there are a lot of such
 accesses and you wouldn't want to make them wait for the HDD.
 
  As a rule of thumb, I assume that a 1rpm HDD can't handle swap larger
 than ~512M-1G and a 1500rpm HDD shouldn't be burdened with more than 1-2G of
 swap.
 
 
Should I really use 32GB swap space for such machine?
 
  If you know that your machine will still run with ~30G swapped out...
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] strange problem with nagios nrpe server

2007-12-06 Thread Kirill Korotaev
Steve Wray wrote:
 Gregor Mosheh wrote:
 
The good news is that I use Nagios with our VPSs, and it works brilliantly.


include_dir=/etc/nagios/nrpe.d
I have found that while this directive works under Xen this does not 
work under openvz.

I find that surprising. Are you sure that the permissions didn't get 
mangled when you copied it over to ovz? That's the first thing I'd 
check: making sure that /etc/nagios/nrpe.d is in fact a directory, and 
that's readable by the user who runs nrpe (user nagios?).
 
 
 Believe me, thats the first thing I checked.
 
 I've run nrpd under strace and see nothing out of the ordinary; it finds 
 the correct number of files in the nrpd.d directory. Not much of a whizz 
 with strace tho so don't know where to go from here.

Can you take
# strace -f -o somefile nrpe
from both working (Xen) and non-working (OVZ) installations?

Is it possible to get a login, with exact instructions on how to reproduce the issue
on your box? This would help to resolve it ASAP and make sure it's not black 
magic :@)

Thanks,
Kirill

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] strange problem with nagios nrpe server

2007-12-06 Thread Kirill Korotaev
Steve Wray wrote:
 Just one other possible data point.
 
 I may have just dismissed these problems as some kind of creeping 
 senility but I've seen some other bizarre issues with VMs migrated into 
 OpenVZ.
 
 One of these is to do with Samba filesharing.
 
 When the VM is migrated into OpenVZ from Xen, samba fileshares on the VM 
 can be accessed from Windows *only* by FQDN not by bare hostname.
 
 Note that this broke *existing* mapped network drives for Windows users.
 
 Also note that this did *not* affect Linux nor OSX clients; only Windows.
 
 Since I've verified that this wierdness is *only* apparent when the VM 
 was running under OpenVZ not under Xen I'm not inclined to believe that 
 I am going insane when I find that NRPE under Debian Sarge has a problem 
 when running under OpenVZ and not under Xen.


 It starts to seem that OpenVZ can produce all *kinds* of unpredictable 
 behavior... either that or I really am going mad complete with 
 hallucinations :-/ Not discounting that possibility out of hand...

Oh, don't say so. Everything should have a logical explanation.
And I guess I know the answer to this one.

First of all, plz check that you don't have any kind of firewall
rules in host system and VE with 'iptables -L'.

But the real suspect is broadcast network messages from the NetBIOS protocol.
A working FQDN means that the host can be found via DNS and by IP.
Non-working short hostnames mean that your hosts are not set up
in the default domain in DNS and that name resolution via NetBIOS failed.

You need to connect your VE to the ethX adapter using a veth (virtual ethernet)
adapter and a Linux bridge. This will allow the use of network broadcasts.
The default venet networking is a secure IP-level networking which filters
out broadcasts. (A rough command sketch follows the links below.)

http://wiki.openvz.org/Virtual_Ethernet_device
http://wiki.openvz.org/Differences_between_venet_and_veth
http://forum.openvz.org/index.php?t=msggoto=7295srch=samba#msg_7295
http://en.wikipedia.org/wiki/NetBIOS
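
Roughly, the setup looks like this (just a sketch; the VE ID 101, the bridge
name br0 and the interface names are made up here, see the wiki pages above for
the full procedure, and note that in a real setup the host's IP usually moves
from eth0 to br0):

# on the hardware node
vzctl set 101 --netif_add eth0 --save   # creates veth101.0 on the host side
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 veth101.0
ifconfig br0 up
# then configure the IP on eth0 inside the VE as usual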

Foreseeing your question about why venet is used as the default networking type:
1. venet is more secure (see wiki).
2. venet is more scalable, up to hundreds and thousands of VEs,
   while veth/ethernet/bridge broadcasts/multicasts will simply kill (DoS) the node
   in case of many VEs.

Thanks,
Kirill

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] strange problem with nagios nrpe server

2007-12-06 Thread Kirill Korotaev
BTW,

Do you use/have a WINS server? It is usually used for name resolution
and can be used w/o broadcasts, so it should work even with your current
configuration.

http://www.oreilly.com/catalog/samba/chapter/book/ch07_03.html

Thanks,
Kirill


Kirill Korotaev wrote:
 Steve Wray wrote:
 
Just one other possible data point.

I may have just dismissed these problems as some kind of creeping 
senility but I've seen some other bizarre issues with VMs migrated into 
OpenVZ.

One of these is to do with Samba filesharing.

When the VM is migrated into OpenVZ from Xen, samba fileshares on the VM 
can be accessed from Windows *only* by FQDN not by bare hostname.

Note that this broke *existing* mapped network drives for Windows users.

Also note that this did *not* affect Linux nor OSX clients; only Windows.

Since I've verified that this wierdness is *only* apparent when the VM 
was running under OpenVZ not under Xen I'm not inclined to believe that 
I am going insane when I find that NRPE under Debian Sarge has a problem 
when running under OpenVZ and not under Xen.


It starts to seem that OpenVZ can produce all *kinds* of unpredictable 
behavior... either that or I really am going mad complete with 
hallucinations :-/ Not discounting that possibility out of hand...
 
 
 Oh, don't say so. Everything should have a logical explanation.
 And I guess I know the answer to this one.
 
 First of all, plz check that you don't have any kind of firewall
 rules in host system and VE with 'iptables -L'.
 
 But the real suspect is broadcast network messages from NetBIOS protocol.
 Working FQDN means that host can be found via DNS and by IP.
 Non-working short hostnames mean that your hosts are not setup
 in default domain in DNS and that name resolution via netbios failed.
 
 You need to connect your VE to ethX adapter using veth (virtual ethernet)
 adapter and Linux bridge. This will allow use of network broadcasts.
 The default venet networking is a secure IP-level networking which filters
 out broadcasts.
 
 http://wiki.openvz.org/Virtual_Ethernet_device
 http://wiki.openvz.org/Differences_between_venet_and_veth
 http://forum.openvz.org/index.php?t=msggoto=7295srch=samba#msg_7295
 http://en.wikipedia.org/wiki/NetBIOS
 
 Forseeing your question about why venet is used as default networking type:
 1. venet is more secure (see wiki).
 2. venet is more scalable up to hundrends and thousands of VEs,
while veth/ethernet/bridge broadcasts/multicasts will simply kill (DoS) 
 the node
in case of many VEs.
 
 Thanks,
 Kirill
 
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] license daemons and MAC adresses in VEs

2007-12-03 Thread Kirill Korotaev
Thomas Sattler wrote:
Which MAC addresses are used inside the VE's? The real MAC add-
resses or some virtual addresses?

you can grant some ethX device exclusively to VE,
in this case it will be real MAC.

or you can create veth adapter with whatever MAC you want.
 
 
 Is it possible to re-use the real MAC on a virtual adapter,
 or does that (read: can eventually) cause problems?

if you don't put the real adapter and the virtual one in the same bridge,
no conflicts will happen.

Thanks,
Kirill

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Venet's ips disappearing...

2007-12-03 Thread Kirill Korotaev
Dariush Pietrzak wrote:
this can happen if some hotplug/udev event has happened and removed the routes
in host node. check /var/log/messages for any kind of events like eth link 
DOWN/UP,
DHCP lease reacquiring etc.
 
  I can't find anything like that, BUT I can correllate those events ( ie
 direct route entries in route table disappearing ) with network problems,
 but this shouldn't be visible to the host ( things like upstream provider
 loosing link, snow severely slowing down radio uplink etc ).
  This seems strange, and identical routes created by hand, and not by vz
 stay in place. This looks very much like vz is doing this, so this mail is
 a broadcast - did anyone else see something like this or something
 similiar?
 
  The funny part is that it happens only on freshly installed dev machine
 and not on machines I'm actually using;)

Hm... this is quite interesting...

Does it happen often?
I'm aware of only 2 places where routing can be removed:
1. hotplug, which I mentioned already.
   maybe some other clever daemon running on your system?
   RIP/OCFS?
2. the OVZ crontab script which should delete dead routes.
   can you check this one as well?

All I can suggest for debugging is to:
1. replace the ip and route commands with a wrapper which logs the exact
   commands and when/who called them (a rough sketch follows below).
2. add similar debug to kernel.
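
For item 1, something along these lines could work (just a sketch, assuming the
real binary has been moved aside to /sbin/ip.real; the same idea applies to route):

#!/bin/sh
# /sbin/ip wrapper: log the caller and arguments, then run the real binary
echo "$(date) ppid=$PPID caller=$(tr '\0' ' ' < /proc/$PPID/cmdline 2>/dev/null) args: $*" >> /var/log/ip-wrapper.log
exec /sbin/ip.real "$@"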

BTW... does your host system use DHCP or a static IP?

Thanks,
Kirill

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Venet's ips disappearing...

2007-12-03 Thread Kirill Korotaev
Dariush Pietrzak wrote:
BTW... does your host system uses DHCP or static IP assigned?
 
  aaah, now that you mention it, this is the only dhcp-configured machine
 with openvz I've got around... and today I disabled DHCP server and few
 hours later noticed the problem with openvz. 
  This might be it, and it would explaing non-immediate connection with
 networking problems. Thanks.

Yep. DHCP client tries to be too smart :/
I've added this KB:
http://wiki.openvz.org/Networking:_disappering_routes_in_HN

Thanks,
Kirill

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] license daemons and MAC adresses in VEs

2007-11-30 Thread Kirill Korotaev
you can grant some ethX device exclusively to a VE;
in this case it will have the real MAC.

Or you can create a veth adapter with whatever MAC you want.
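E.g. roughly like this (a sketch only; the VE ID and the MAC below are made up):

vzctl set 101 --netif_add eth0,00:18:51:AA:BB:CC --save

which creates a veth pair and assigns that MAC to eth0 inside the VE.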

Thanks,
Kirill

Thomas Sattler wrote:
 Hi there ...
 
 I'm new to openvz, therefore I apologize if my question is simple.
 
 I'd like to use openvz to run several small services on a big
 server. As fas as I read, openvz is just the right way to do that.
 
 My question is this: Among the 'small services' there will be at
 least two different license managers which check the MAC address
 of the nics to confirm they are running on the 'correct' server.
 
 Which MAC addresses are used inside the VE's? The real MAC add-
 resses or some virtual addresses?
 
 Thomas
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Talking to a SCSI tape device from VPS

2007-11-20 Thread Kirill Korotaev
have you granted VPS access to the device in question
using vzctl set --devices option?
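
E.g., assuming the tape device is /dev/st0 and the container ID is 101 (both
just examples):

vzctl set 101 --devnodes st0:rw --save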

Kirill

Jim Archer wrote:
 Is there any reason that software running in a VPS would be unable to drive 
 a tape device, so a backup server could run in a VPS?
 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Problems with plesk + openvz

2007-11-18 Thread Kirill Korotaev
Joan wrote:
 2007/11/10, Joan  [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]:
  2007/11/9, Marcin Owsiany [EMAIL PROTECTED]
 mailto:[EMAIL PROTECTED]:
   On Fri, Nov 09, 2007 at 01:06:31PM +0100, Joan wrote:
Well, 3.5Gb should be a fair amount of memory for that amount of
domains as I experienced with physical machines.
I would like to know what approaches have taken the people
 experiecing
similar issues...
  
   Limit MaxClients, MinSpareServers, MaxSpareServers and most
 importantly
   MaxRequestsPerChild. This way PHP will not have much time to
 leak too
   much memory, which should keep the usage down a bit.
  
  I tunned the following Timeout, MaxKeepAliveRequests,
 KeepAliveTimeout.
  And also the ones that you've told, specially  MaxClients and
  MaxRequestsPerChild wich seem to be the most important ones.
 
  Will see how it goes for the next days!
 Too bad, nothing changes, memory keeps increasing everytime until
 everything crashes silently, thanks to the alarms everytime it happens
 I can reboot the services, but it's not normal...
 Tomorrow I'l compare the parameters (ps, lsof, netsat) in the critical
 moments with the ones in normal time  and see.
 
 
 Ok, finally got time to check
 
 After some time of restarting the whole VeID lsof brings me some
 information:
 lsof | wc -l   - Has a value of 8560
 In the moment where it has almost no memory:
 wc -l lsof_with_problems  - The value is 30006
 
 Analyzing the file a bit further I can see that out of 30006 open files,
 the owner of 28266 is apache2
 
 I would guess that somehow apache is not closing the files, either for
 memory problems with openvz, or maybe because the non-threading
 configuration that can slowdown the apache process.
 Any clue?
 
 Shall I go to ask to apache mailing list? Or could it somehow be related
 to openVZ? 

Nope, it looks purely like an apache problem.
You can also check the following:
- what are these files? ls -la /proc/<pid>/fd
  i.e. they can be sockets or files. what are they?
  `netstat -natp` output can be helpful as well
- plz post the /proc/user_beancounters output.
  it always helps to analyze resource shortage problems.
- what does apache say in the log when the problem begins?
You can also ask Plesk support if the issue is related to the Plesk product.

Thanks,
Kirill

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Venet's ips disappearing...

2007-11-18 Thread Kirill Korotaev
this can happen if some hotplug/udev event has happened and removed the routes
on the host node. Check /var/log/messages for any kind of events like eth link 
DOWN/UP, DHCP lease reacquiring, etc.

You can also replace the ip and route utilities with a wrapper which logs who
removes the routes and when, and find the one to blame.

Thanks,
Kirill


Dariush Pietrzak wrote:
 Hello, 
  I noticed that some of ips are sometimes disappearing from my host,
 on HN it looks like this:
 
 devamd:# vzlist
  VPSID  NPROC STATUS  IP_ADDR HOSTNAME
   1002  4 running 192.168.89.106  -
   1003  4 running 192.168.89.107  etchdev386
   1004  4 running 192.168.89.108  etchdevamd64
 
 but those ips are unavailable, ping doesn't work, so I check the routing:
 
 devamd:# ip r
 192.168.89.0/24 dev eth3  proto kernel  scope link  src 192.168.89.105
 default via 192.168.89.1 dev eth3
 
 I think it shouldn't look like this, so I try such sequence:
 
   devamd:~# vzctl set 1004 --ipadd 192.168.89.108
   Adding IP address(es): 192.168.89.108
   Unable to add IP 192.168.89.108: Address already in use
 
 strange, if it's in use then where can I see that?.. so:
 
 
   devamd:# vzctl set 1003 --ipdel 192.168.89.107
   Deleting IP address(es): 192.168.89.107
   vps-net_del WARNING: Function proxy_arp for eth3 is set to 0. Enable 
 with 'sysctl -w net.ipv4.conf.eth3.proxy_arp=1'. See 
 /usr/share/doc/vzctl/README.Debian.
 WARNING: Settings were not saved and will be resetted to original values on 
 next start (use --save flag)
 
   devamd:# vzctl set 1003 --ipadd 192.168.89.107
   Adding IP address(es): 192.168.89.107
   vps-net_add WARNING: Function proxy_arp for eth3 is set to 0. Enable 
 with 'sysctl -w net.ipv4.conf.eth3.proxy_arp=1'. See 
 /usr/share/doc/vzctl/README.Debian.
 WARNING: Settings were not saved and will be resetted to original values on 
 next start (use --save flag)
 
 and then it looks like this:
 
 devamd:# ip r
 192.168.89.107 dev venet0  scope link  src 192.168.89.105
 192.168.89.0/24 dev eth3  proto kernel  scope link  src 192.168.89.105
 default via 192.168.89.1 dev eth3
 
 
 is there a reason why such disappearing act should happen? (I'm running 
 fairly minimal debian etch on HN, there's only vzctl and few things like
 mtr, screen installed, nothing should be touching networking)
 
  this is my only amd64 machine using venet, on i386s with almost identical
 setup I haven't noticed anything similiar

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Loadavg virtualisation problem (028stab047.1+fix)

2007-11-18 Thread Kirill Korotaev
ok, Thanks a lot! I've added this info to the bug.

Thanks,
Kirill

Dariush Pietrzak wrote:
I guess that what causes it, is having in guest more processes with runnable
state then reduced virtual cpus available.
 
  Faster/simpler way:
  - go to guest, run 8x dd if=/dev/zero of=/dev/null
  - load should fairly quickly start creeping up to 8.0
  - on HN set cpus to 1
  - let it run, then stop all the dd's
 
  result:
 Guest:
 [EMAIL PROTECTED]:~/40p-ovz/work$ w
  16:13:44 up  5:21,  1 user,  load average: 2.99, 2.94, 2.20
 (and stays like this, 15-minute average even grows)
 HN:
 codev64:~# w
  16:14:05 up  5:25,  2 users,  load average: 0.00, 1.76, 2.65
 
 Playing with dd and set --cpus I also managed to cause load ~8 on HN while
 guest reported only 2.0
 
 .. I guess noone is doing things like this on production systems, and even
 if so, running chkpnt/restore as precaution when performing such changes 
 is not out of the question.

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


[Users] Re: PLEASE HELP !!!!

2007-11-07 Thread Kirill Korotaev
1. plz don't spam to all available mailing lists if you want help.
2. check dmesg output after `service vz start` failed.

Kirill

KaMraN wrote:
 Hi,
 
 I'm trying to install OpenVZ but I think its failed !
 
 Can you please help me ?
 
 [EMAIL PROTECTED] /]# /sbin/service vz start
 Running kernel is not OpenVZ kernel.   [FAILED]
 
 Why this happen ?
 
 What should I do ?
 
 Can you tell me step by step I want to install OpenVZ on CentOS Linux (
 Intel DUAL CORE E2140 ) ?
 
 and I want to install it via YUM !
 
 Please tell me YUM install ovzkernel-?? ( I want the latest stable
 version )
 
 and the configuration in grub and bootloader !
 
 Is this correct ?
 
 -
 # grub.conf generated by anaconda
 #
 # Note that you do not have to rerun grub after making changes to this file
 # NOTICE:  You have a /boot partition.  This means that
 #  all kernel and initrd paths are relative to /boot/, eg.
 #  root (hd0,0)
 #  kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
 #  initrd /initrd-version.img
 #boot=/dev/sda
 default=0
 timeout=5
 splashimage=(hd0,0)/grub/splash.xpm.gz
 hiddenmenu
 title CentOS (2.6.18-8.1.15.el5.028stab047.1)
 root (hd0,0)
 kernel /vmlinuz-2.6.18-8.1.15.el5.028stab047.1 ro
 root=/dev/VolGroup00/$
 initrd /initrd-2.6.18-8.1.15.el5.028stab047.1.img
 title CentOS (2.6.18-8.el5)
 root (hd0,0)
 kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00
 initrd /initrd-2.6.18-8.el5.img
 
 
 title OpenVZ (2.6.8-022stab029.1)
 root (hd0,0)
 kernel /vmlinuz-2.6.8-022stab029.1 ro root=/dev/sda5
 initrd /initrd-2.6.8-022stab029.1.img
 
 
 
 and sysctl ?
 
 
 --
 
 # Kernel sysctl configuration file for Red Hat Linux
 #
 # For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
 # sysctl.conf(5) for more details.
 
 # Controls IP packet forwarding
 net.ipv4.ip_forward = 1
 
 # Controls source route verification
 net.ipv4.conf.default.rp_filter = 1
 
 # Do not accept source routing
 net.ipv4.conf.default.accept_source_route = 0
 
 # Controls the System Request debugging functionality of the kernel
 kernel.sysrq = 1
 
 # Controls whether core dumps will append the PID to the core filename
 # Useful for debugging multi-threaded applications
 kernel.core_uses_pid = 1
 
 # Controls the use of TCP syncookies
 net.ipv4.tcp_syncookies = 1
 
 # Controls the maximum size of a message, in bytes
 kernel.msgmnb = 65536
 
 # Controls the default maxmimum size of a mesage queue
 kernel.msgmax = 65536
 
 # Controls the maximum shared segment size, in bytes
 kernel.shmmax = 4294967295
 
 # Controls the maximum number of shared memory segments, in pages
 kernel.shmall = 268435456
 
 net.ipv4.conf.default.send_redirects = 1
 net.ipv4.conf.all.send_redirects = 0
 
 
 ---
 
 and I downloaded :
 http://download.openvz.org/template/precreated/contrib/centos-5-i386-default.tar.gz
 
 what should I do next ?
 
 Thank you

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Kernel 2.6.18-openvz-13-39.1d1-amd64 oops

2007-11-06 Thread Kirill Korotaev
Frank,

can you please try this kernel?

Thanks,
Kirill

Thorsten Schifferdecker wrote:
 Hi Kirill, hi Frank,
 
 @Kirill:
 yes i found a misprint, which affected the debian linux tree only
 028stab045, done in my linux-patch-openvz.
 
 I can take a look between Ola's 028stab039 and my patches and mail the
 diff to Alex and you.
 
 @Frank:
 I've built the the OpenVZ kernels called fzakernel, it's the debian kernel
 config plus OpenVZ support, version 028stab048;
 see
 http://debian.systs.org/openvz/166/new-openvz-flavour-added-fzakernel-version-028stab481/
 
 Regards,
  Thorsten
 
 Am Mo, 29.10.2007, 17:34, schrieb Kirill Korotaev:
 
Thorsten,

Alexey Dobriyan told me that you said you found some misprint
in OVZ patch ported to debian.
Is it true? Can you please point to this? Any patch?

to Frank, can you try using 2.6.18-mainstream-OVZ kernel until this is
 
 resolved in Debian branch and thus confirm that it's purely
 
debian-kernel problem?

Thanks,
Kirill


E Frank Ball III wrote:

On Wed, Oct 24, 2007 at 06:31:39AM +0200, Martin Trtusek wrote:
  The same kernel on i386 is working without oops (uptime 22 days).
Should
  I fulfill a bug ?
 
  Unfortunately previous hardware are not available for test now (it is
in
  production, with 2.6.18-openvz-13-1etch4 kernel). Probably after
 
 next week we will have similar one for 1-2 week testing.
 
 
  Martin Trtusek
 
  Martin Trtusek pí?e v St 10. 10. 2007 v 07:54 +0200:
   I installed kernel 2.6.18-openvz-13-39.1d1-amd64 from
   http://download.openvz.org/debian on Debian Etch one week ago and
 
 experienced kernel oops (complete freezing, off/on necessary)
 after
 
2-3
   days of running (3 times). Oops is always after cron.daily scripts
(in
   my case 06:25) but not everyday. Yesterday I configured netconsole
for
   capturing useful info, enclosed.


I've seen three crashes with
linux-image-2.6.18-openvz-13-39.1d2-686_028.39.1d2_i386.deb

I changed my production server back to 2.6.18-openvz-12-1etch1-686.

I captured the output this time:

preparing to turn dcache accounting on, size 4294967293 pages,
watermarks 0 21800
UBC: turning dcache accounting on succeeded, usage 1236258, time 0.040
 
 [ cut here ]
 
kernel BUG at kernel/sched.c:3798!
invalid opcode:  [#1]
SMP
Modules linked in:  netconsole tcp_diag inet_diag hp100 nfs simfs
 
 vznetdev vzethdev vzrst ip_nat vzcpt ip_conntrack nfnetlink vzdquota
 vzmon vzdev xt_length ipt_ttl xt_tcpmss ipt_TCPMSS iptable_mangle
 iptable_filter xt_multiport xt_limit ipt_tos ipt_REJECT ip_tables
 x_tables nfsd exportfs lockd nfs_acl sunrpc ppdev lp ipv6 nls_iso8859_1
 isofs dm_snapshot dm_mirror dm_mod uhci_hcd ehci_hcd usb_storage
 ide_generic loop snd_cs46xx gameport snd_seq_dummy snd_seq_oss
 
snd_seq_midi snd_seq_midi_event snd_seq tsdev snd_rawmidi
 
 snd_seq_device snd_ac97_codec snd_ac97_bus snd_pcm_oss snd_mixer_oss
 i2c_piix4 snd_pcm i2c_core snd_timer parport_pc psmouse rtc serio_raw
 snd evdev soundcore snd_page_alloc shpchp pci_hotplug parport
 sworks_agp agpgart floppy pcspkr ide_floppy ext3 jbd mbcache sd_mod
 ide_cd cdrom ide_disk ohci_hcd usbcore aic7xxx scsi_transport_spi
 scsi_mod serverworks generic ide_core e100 mii processor
 
CPU:1, VCPU: -1.1
EIP:0060:[c01163a8]Not tainted VLI
EFLAGS: 00010046   (2.6.18-openvz-13-39.1d2-686 #1)
EIP is at rebalance_tick+0x2fa/0x485
eax: 005e   ebx: c035c6c0   ecx: 0008   edx: dfb05d94
esi: c2214180   edi: dfb99000   ebp: dfb05db0   esp: dfb05d64
ds: 007b   es: 007b   ss: 0068
Process swapper (pid: 0, veid: 0, ti=dfb04000 task=dfb01220
task.ti=dfb04000)
Stack:   dfb98000 dfb98000 330b1369 0001 0002
 
 0001
 
   dfb99000 1fb2c449 dfb99000 0003 00ff 005e 

   dfb01220 0001  c1f78da4 c012476b dfb05dd0 f524a414
0202
 Call Trace:
 [c012476b] update_process_times+0x52/0x5c
 [c010c9d2] smp_apic_timer_interrupt+0x9b/0xa1
 [c010342b] apic_timer_interrupt+0x1f/0x24
 [c0281338] _spin_unlock_irqrestore+0x8/0x9
 [c012d3f6] __wake_up_bit+0x29/0x2e
 [c016515e] end_buffer_async_write+0xe3/0x105
 [c014a30c] mempool_free+0x5f/0x63
 [c0164939] end_bio_bh_io_sync+0x0/0x39
 [c0164967] end_bio_bh_io_sync+0x2e/0x39
 [c016618d] bio_endio+0x50/0x55
 [c01a835d] __end_that_request_first+0x11b/0x425
 [f891f12d] scsi_end_request+0x1a/0xa9 [scsi_mod]
 [c014a30c] mempool_free+0x5f/0x63
 [f891f2ff] scsi_io_completion+0x143/0x2ed [scsi_mod]
 [f89553b2] sd_rw_intr+0x1eb/0x215 [sd_mod]
 [f891b3bd] scsi_finish_command+0x73/0x77 [scsi_mod]
 [c01a9f25] blk_done_softirq+0x4d/0x58
 [c01205aa] __do_softirq+0x84/0x109
 [c0120665] do_softirq+0x36/0x3a
 [c0104ea4] do_IRQ+0x8a/0x92
 [c010339a] common_interrupt+0x1a/0x20
 [c0101731] default_idle+0x0/0x59
 [c0101762] default_idle+0x31/0x59
 [c01017e8] cpu_idle+0x5e/0x74
Code:  0c 85 c0 0f 84 4e 01 00 00 53 89 c2 8b 4d b8 ff 75 e8 8b 45 dc
 
 e8 a7 cf ff ff 89 c3 58 85 db 5a 0f 84 31 01 00 00 39 7d d4 75 0b 0f
 0b 66 b8 d6 0e b8 a3 61

Re: [Users] problem run shell script by vzctl exec

2007-11-02 Thread Kirill Korotaev
yep. the actual command executed in 2nd case is:
server3:~# vzctl exec 180 echo /root
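
(If the intent is to expand $HOME inside the VE, quote it so the host shell
does not expand it first:

server3:~# vzctl exec 180 'echo $HOME'

which, like the hi.sh example, would then print the value seen inside the VE.)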

Kirill

On Fri, 2007-11-02 at 00:50 -0500, Drake Wilson wrote:
 Quoth hhding.gnu [EMAIL PROTECTED], on 2007-11-02 11:42:32 +0800:
  what's the problem?
  
  server3:~# vzctl exec 180 cat hi.sh
  #!/bin/bash
  
  echo $HOME
  server3:~# vzctl exec 180 hi.sh
  /
 
 Subprocess for bash starts running inside VE 180, begins executing
 hi.sh.  It's got a different environment, and expands $HOME into /,
 then echoes it to stdout.
 
  server3:~# vzctl exec 180 echo $HOME
  /root
 
 _Current_ shell expands $HOME into /root.  Subprocess for echo starts
 running inside VE 180, with the parameter already expanded.  (vzctl
 doesn't even see the $HOME.)  It is told to echo /root to stdout,
 so it does.
 
 Does that sound about right?
 
--- Drake Wilson
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Kernel 2.6.18-openvz-13-39.1d1-amd64 oops

2007-10-29 Thread Kirill Korotaev
Thorsten,

Alexey Dobriyan told me that you said you found some misprint
in OVZ patch ported to debian.
Is it true? Can you please point to this? Any patch?

to Frank, can you try using 2.6.18-mainstream-OVZ kernel until this
is resolved in Debian branch and thus confirm that it's purely debian-kernel 
problem?

Thanks,
Kirill


E Frank Ball III wrote:
 On Wed, Oct 24, 2007 at 06:31:39AM +0200, Martin Trtusek wrote:
   The same kernel on i386 is working without oops (uptime 22 days). Should
   I fulfill a bug ?
   
   Unfortunately previous hardware are not available for test now (it is in
   production, with 2.6.18-openvz-13-1etch4 kernel). Probably after next
   week we will have similar one for 1-2 week testing.
   
   Martin Trtusek
   
   Martin Trtusek pí?e v St 10. 10. 2007 v 07:54 +0200:
I installed kernel 2.6.18-openvz-13-39.1d1-amd64 from
http://download.openvz.org/debian on Debian Etch one week ago and
experienced kernel oops (complete freezing, off/on necessary) after 2-3
days of running (3 times). Oops is always after cron.daily scripts (in
my case 06:25) but not everyday. Yesterday I configured netconsole for
capturing useful info, enclosed.
 
 
 I've seen three crashes with
 linux-image-2.6.18-openvz-13-39.1d2-686_028.39.1d2_i386.deb 
 
 I changed my production server back to 2.6.18-openvz-12-1etch1-686.
 
 I captured the output this time:
 
 preparing to turn dcache accounting on, size 4294967293 pages, watermarks 0 
 21800
 UBC: turning dcache accounting on succeeded, usage 1236258, time 0.040
 [ cut here ]
 kernel BUG at kernel/sched.c:3798!
 invalid opcode:  [#1]
 SMP 
 Modules linked in:  netconsole tcp_diag inet_diag hp100 nfs simfs
 vznetdev vzethdev vzrst ip_nat vzcpt ip_conntrack nfnetlink vzdquota
 vzmon vzdev xt_length ipt_ttl xt_tcpmss ipt_TCPMSS iptable_mangle
 iptable_filter xt_multiport xt_limit ipt_tos ipt_REJECT ip_tables
 x_tables nfsd exportfs lockd nfs_acl sunrpc ppdev lp ipv6 nls_iso8859_1
 isofs dm_snapshot dm_mirror dm_mod uhci_hcd ehci_hcd usb_storage
 ide_generic loop snd_cs46xx gameport snd_seq_dummy snd_seq_oss
 snd_seq_midi snd_seq_midi_event snd_seq tsdev snd_rawmidi snd_seq_device
 snd_ac97_codec snd_ac97_bus snd_pcm_oss snd_mixer_oss i2c_piix4 snd_pcm
 i2c_core snd_timer parport_pc psmouse rtc serio_raw snd evdev soundcore
 snd_page_alloc shpchp pci_hotplug parport sworks_agp agpgart floppy
 pcspkr ide_floppy ext3 jbd mbcache sd_mod ide_cd cdrom ide_disk ohci_hcd
 usbcore aic7xxx scsi_transport_spi scsi_mod serverworks generic ide_core
 e100 mii processor
 CPU:1, VCPU: -1.1
 EIP:0060:[c01163a8]Not tainted VLI
 EFLAGS: 00010046   (2.6.18-openvz-13-39.1d2-686 #1) 
 EIP is at rebalance_tick+0x2fa/0x485
 eax: 005e   ebx: c035c6c0   ecx: 0008   edx: dfb05d94
 esi: c2214180   edi: dfb99000   ebp: dfb05db0   esp: dfb05d64
 ds: 007b   es: 007b   ss: 0068
 Process swapper (pid: 0, veid: 0, ti=dfb04000 task=dfb01220 task.ti=dfb04000)
 Stack:   dfb98000 dfb98000 330b1369 0001 0002 
 0001 
dfb99000 1fb2c449 dfb99000 0003 00ff 005e  
  
dfb01220 0001  c1f78da4 c012476b dfb05dd0 f524a414 
 0202 
  Call Trace:
  [c012476b] update_process_times+0x52/0x5c
  [c010c9d2] smp_apic_timer_interrupt+0x9b/0xa1
  [c010342b] apic_timer_interrupt+0x1f/0x24
  [c0281338] _spin_unlock_irqrestore+0x8/0x9
  [c012d3f6] __wake_up_bit+0x29/0x2e
  [c016515e] end_buffer_async_write+0xe3/0x105
  [c014a30c] mempool_free+0x5f/0x63
  [c0164939] end_bio_bh_io_sync+0x0/0x39
  [c0164967] end_bio_bh_io_sync+0x2e/0x39
  [c016618d] bio_endio+0x50/0x55
  [c01a835d] __end_that_request_first+0x11b/0x425
  [f891f12d] scsi_end_request+0x1a/0xa9 [scsi_mod]
  [c014a30c] mempool_free+0x5f/0x63
  [f891f2ff] scsi_io_completion+0x143/0x2ed [scsi_mod]
  [f89553b2] sd_rw_intr+0x1eb/0x215 [sd_mod]
  [f891b3bd] scsi_finish_command+0x73/0x77 [scsi_mod]
  [c01a9f25] blk_done_softirq+0x4d/0x58
  [c01205aa] __do_softirq+0x84/0x109
  [c0120665] do_softirq+0x36/0x3a
  [c0104ea4] do_IRQ+0x8a/0x92
  [c010339a] common_interrupt+0x1a/0x20
  [c0101731] default_idle+0x0/0x59
  [c0101762] default_idle+0x31/0x59
  [c01017e8] cpu_idle+0x5e/0x74
 Code:  0c 85 c0 0f 84 4e 01 00 00 53 89 c2 8b 4d b8 ff 75 e8 8b 45 dc e8
 a7 cf ff ff 89 c3 58 85 db 5a 0f 84 31 01 00 00 39 7d d4 75 0b 0f 0b
 66 b8 d6 0e b8 a3 61 29 c0 39 fb 89 5d d4 0f 84 16 01 00
 EIP: [c01163a8] rebalance_tick+0x2fa/0x485 SS:ESP 0068:dfb05d64
 Kernel panic - not syncing: Fatal exception in interrupt
 BUG: warning at arch/i386/kernel/smp.c:550/smp_call_function()
  [c010b093] smp_call_function+0x53/0xfe
  [c011bfce] vprintk+0x26/0x3a
  [c010b151] smp_send_stop+0x13/0x1c
  [c011b322] panic+0x4c/0xe2
  [c0103d59] die+0x252/0x269
  [c0104617] do_invalid_op+0x0/0x9d
  [c01046a8] do_invalid_op+0x91/0x9d
  [c01163a8] rebalance_tick+0x2fa/0x485
  [f8e3c87a] ipt_do_table+0x2a1/0x2cb [ip_tables]
  [c01b2b10] 

Re: [Users] Trying to compile ovz028stab0.45 (non-RedHat)

2007-10-24 Thread Kirill Korotaev
Dariush,

1. are the sources taken from .src.rpm or git?
2. plz attach your config so that we could check onsite.

Kirill

Dariush Pietrzak wrote:
 ..I get:
 kernel/pid.c: In function 'free_pid':
 kernel/pid.c:197: error: dereferencing pointer to incomplete type
 kernel/pid.c: In function 'alloc_pid':
 kernel/pid.c:233: error: dereferencing pointer to incomplete type
 make[2]: *** [kernel/pid.o] Error 1
 make[1]: *** [kernel] Error 2
 
 is this expected?

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Trying to compile ovz028stab0.45 (non-RedHat)

2007-10-24 Thread Kirill Korotaev
Thanks! will commit it today.

Thanks,
Kirill

Anton Gorlov wrote:
 Dariush Pietrzak пишет:
 
Dariush,
1. are the sources taken from .src.rpm or git?

 I downloaded this:
 http://download.openvz.org/kernel/branches/2.6.18/028stab045.1/patches/patch-ovz028stab045.1-combined.gz
and applied it to clean 2.6.18 ( not 2.6.18.8, as previous patch had such
requirement ) 
 
 http://bugzilla.openvz.org/show_bug.cgi?id=689
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] OpenVZ kernel based on RHEL5 kernel-2.6.18-8.1.15.el5.x86_64.rpm?

2007-10-23 Thread Kirill Korotaev
I've put a preliminary build here:
http://download.openvz.org/~dev/028stab047.1/

It will be publicly available tomorrow after minimal testing.

Thanks,
Kirill

Mark A. Schwenk wrote:
 Red Hat has released a new RHEL5 kernel with important security updates. 
 Will the OpenVZ team be building a new OpenVZ kernel based on RHEL5 
 kernel-2.6.18-8.1.15.el5.x86_64.rpm?
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Unable to start VPS on Debian Etch

2007-10-18 Thread Kirill Korotaev
You are welcome!

Kirill

Roberto Mello wrote:
 On 10/16/07, Kirill Korotaev [EMAIL PROTECTED] wrote:
 
plz check that OVZ tools: vzctl and vzquota are 64 bit versions as well.
looks like they are 32bit. they have to be of the same arch as kernel.
 
 
 Well, darn it. I should have thought of that, but I didn't. That was
 exactly the problem.
 
 Problem solved. Thanks a lot Kirill!
 
 -Roberto
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: Fwd: [Users] boot error - unable to mount root fs on unknown-block(0, 0)

2007-10-18 Thread Kirill Korotaev
Ian,

just like you would to any other machine: using
ssh from your workstation.

1. assign an IP address to the VE:
ve0# vzctl set <VEID> --ipadd <VE IP> --save

2. just in case, check that the VE is pingable from your workstation:
ws# ping <VE IP>

3. just in case, check that the VE is running the sshd service:
ve0# vzctl exec <VEID> ps axf | grep sshd

if it is not running sshd, then enter the VE using the vzctl enter command
and install/start the sshd service.

4. don't forget to set the root user password:
ve0# vzctl set <VEID> --userpasswd root:mypassword

5. now you can log in to the VE as to any usual machine, using its IP

Kirill


Ian jonhson wrote:
 -- Forwarded message --
 From: Ian jonhson [EMAIL PROTECTED]
 Date: Oct 18, 2007 2:17 PM
 Subject: Re: [Users] boot error - unable to mount root fs on unknown-block(0, 
 0)
 To: [EMAIL PROTECTED]
 
 
 Thank you very much!
 
 I have created my own VE, however how can I login VE by ssh? I used
 the IP setting described in
 http://wiki.openvz.org/Installation_on_Debian.
 
 Thanks again,
 
 Ian
 
 On 10/17/07, E Frank Ball III [EMAIL PROTECTED] wrote:
 
On Wed, Oct 17, 2007 at 12:20:01PM +0800, Ian jonhson wrote:
 Where can I get the pre-built kernel image?
  
   http://download.openvz.org/debian/dists/etch/main/binary-i386/base/
  
   In your sources list add:
  
   deb http://download.openvz.org/debian etch main
  
 
  I added the line in source.list, but apt-cache search said it can not
  open the website.



apt-cache search linux-image-2.6.18-openvz shows the openvz kernels
for me.



   I'm using linux-image-2.6.18-openvz-13-39.1d2-686_028.39.1d2_i386.deb
  
 
  I opened the link given above and found the image file, but I don't
  know how to use it after download the image file.
 
  Could you give me some advices?


If you manually downloaded it then install it with
dpkg -i linux-image-2.6.18-openvz-13-39.1d2-686_028.39.1d2_i386.deb


--

   E Frank Ball[EMAIL PROTECTED]

 
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Unable to start VPS on Debian Etch

2007-10-16 Thread Kirill Korotaev
Roberto Mello wrote:
 I am unable to start a VPS under a Debian Etch host. I can't find any
 useful message as to why it's failing to start the VPS on
 /var/log/vzctl.log.
 
 The machine is a dual quad-core Xeon (Intel(R) Xeon(R) CPU E5345 @
 2.33GHz). I used the debian-4.0-i386-minimal.tar.gz template from
 openvz.org to create the VE.
 
 Here's what I get when I try to start the VE:
 
 michael:/etc/vz# vzctl --verbose start 101
 Unable to open /usr/lib/vzctl/modules/: No such file or directory
 Starting VPS ...
 Mounting root: /srv/vz/root/101 /srv/vz/private/101
 VPS is mounted
 VPS start failed
 Running: /usr/sbin/vzquota stat 101 -f
 VPS is unmounted

Can you please check /var/log/messages as well? any messages there?
if not - need strace of this command like this:
# strace -fF -o out vzctl --verbose start 101

output will be collected in out file.

 Kernel:
 
 Linux michael 2.6.18-openvz-13-39.1d1-amd64 #1 SMP Sat Sep 29 15:02:55
 MSD 2007 x86_64 GNU/Linux
 
 I grabbed the kernel from
 http://download.openvz.org/kernel/debian/etch/ and put it in place
 manually because the package wouldn't install because I am running a
 32-bit userland, but wanted the 64 bit kernel. It did kernel panic for
 me yesterday when trying to do something on OpenVZ (I can't remember
 what it was unfortunately).
 
 And here are the VZ modules I have loaded. /etc/init.d/vz has been
 started successfully:
 
 michael:/etc/vz# lsmod | grep vz
 vznetdev 26752 1
 vzethdev 17040 0
 vzrst 129832 0
 ip_nat 24720 1 vzrst
 vzcpt 109368 0
 ip_conntrack 72596 3 vzrst,ip_nat,vzcpt
 vzdquota 47464 0 [permanent]
 vzmon 50448 4 vznetdev,vzethdev,vzrst,vzcpt
 vzdev 8456 4 vznetdev,vzethdev,vzdquota,vzmon
 ipv6 298208 29 vzrst,vzcpt,vzmon
 
 I don't know what else to look for. Any help would be greatly appreciated.

simfs module should be loaded as well.

Thanks,
Kirill
___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Unable to start VPS on Debian Etch

2007-10-16 Thread Kirill Korotaev
plz check that the OVZ tools vzctl and vzquota are 64-bit versions as well.
Looks like they are 32-bit; they have to be of the same arch as the kernel.
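
A quick way to check (assuming the usual install paths):

# file /usr/sbin/vzctl /usr/sbin/vzquota

the output should report the same word size (64-bit here) as the running kernel.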

Thanks,
Kirill


Roberto Mello wrote:
 On 10/16/07, Kirill Korotaev [EMAIL PROTECTED] wrote:
 
Can you please check /var/log/messages as well? any messages there?
 
 
 Nope.
 
 
if not - need strace of this command like this:
# strace -fF -o out vzctl --verbose start 101
 
 
 I tried stracing, but nothing obvious jumped out at me. Maybe it will
 to someone else. After VPS is mounted, I get:
 
 13095 read(9,  unfinished ...
 13096 dup2(10, 0)   = 0
 13096 close(9)  = 0
 13096 close(10) = 0
 13096 fcntl64(0, F_SETFD, FD_CLOEXEC)   = 0
 13096 fcntl64(8, F_SETFD, FD_CLOEXEC)   = 0
 13096 close(7)  = 0
 13096 fcntl64(5, F_SETFD, FD_CLOEXEC)   = 0
 13096 close(6)  = 0
 13096 chdir(/srv/vz/root/101) = 0
 13096 chroot(/srv/vz/root/101)= 0
 13096 setsid()  = 13096
 13096 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
 13096 rt_sigaction(SIGHUP, {SIG_DFL}, NULL, 8) = 0
 13096 rt_sigaction(SIGINT, {SIG_DFL}, NULL, 8) = 0
 13096 rt_sigaction(SIGQUIT, {SIG_DFL}, NULL, 8) = 0
 
  a bunch more of these rt_sigaction 
 
 13096 rt_sigaction(65, {SIG_DFL}, NULL, 8) = -1 EINVAL (Invalid argument)
 13096 syscall_511(0x65, 0xbf9ec790, 0xbf9ec790, 0xbf9ebe88,
 0xb7f76100, 0xbf9ebe48, 0xffda, 0x7b, 0x7b, 0, 0x33, 0x1ff,
 0xb7f79792, 0x73, 0x246, 0xbf9ebe08, 0x7b, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 0, 0, 0, 0, 0, 0) = -1 (errno 38)
 13096 write(0, \22\0\0\0, 4 unfinished ...
 13095 ... read resumed \22\0\0\0, 4) = 4
 13096 ... write resumed ) = 4
 13095 close(10) = -1 EBADF (Bad file descriptor)
 13095 close(9)  = 0
 13095 rt_sigaction(SIGCHLD, {SIG_DFL}, NULL, 8) = 0
 13095 write(2, VPS start failed, 16)  = 16
 
 
simfs module should be loaded as well.
 
 
 It is.
 
 Thanks. Much appreciated.
 
 -Roberto
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] VPS capabilities

2007-10-10 Thread Kirill Korotaev
Dietmar Maurer wrote:
 Where can I find more information about vps capabilities, i.e. what
 exactly is:
 
 NET_BIND_SERVICE
 KILL
 LINUX_IMMUTABLE
 NET_ADMIN
 SYS_CHROOT

These are standard Linux capabilities, so you can look at any documentation
related to them, plus the comments in include/linux/capability.h and the kernel
sources.

 VE_ADMIN

It is a restricted subset of the CAP_SYS_ADMIN+CAP_NET_ADMIN capabilities for the VE root.
It allows the VE root to do a lot of things allowed for the standard root, like
configuring firewalls, network devices, etc., but not everything - e.g. the VE root
can't change mtrr registers, can't issue raw SCSI commands, etc.
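
For reference, individual capabilities can be toggled per VE with vzctl; a hedged
example (capability names are lower-case, availability differs between versions):

  vzctl set 101 --capability net_admin:on --save
  vzctl set 101 --capability sys_time:off --save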

Thanks,
Kirill

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: AW: AW: [Users] VPS capabilities

2007-10-10 Thread Kirill Korotaev
Dietmar Maurer wrote:
  
 
 
Most likely the answer is: possible, but not easily.
vzctl requires access to some of the VPS files, global configs,
VE configs, etc. Theoretically it can be fixed and adapted
(e.g. to have 2 global configs: one in VE0 for admin VPS
 
 
 or also do a bind mount for /etc/vz/ ?

yep.

start and one in admin VPS; files from all VEs can also be
accessible via a bind mount to the admin VE), but in practice no
one has tried it.
 
 
 I guess i will try it out ;-)

one bigger problem - networking setup (e.g. routes) in VE0 :/
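
For the /etc/vz bind mount idea above, a minimal sketch assuming VE 101 is the
admin VPS and default paths (untested, just to illustrate):

  mkdir -p /vz/root/101/etc/vz /vz/root/101/vz/private
  mount --bind /etc/vz /vz/root/101/etc/vz
  mount --bind /vz/private /vz/root/101/vz/private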

Kirill

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Running DHCP on VPS, ( on a router.. )

2007-10-01 Thread Kirill Korotaev
Dariush Pietrzak wrote:
It should print some information in /var/log/messages about what packets
are dropped and due to which condition.
 
  All the messages look like this:
 
 Sep 30 13:31:11 dfw1 kernel: veth_xmit() dropped pkt reason 4:
 Sep 30 13:31:11 dfw1 kernel:   src = 00:1b:d5:84:90:d2:, dst = 
 00:08:02:ac:36:20:
  src mac seem to belong to the stations that my dhcpd is supposed to be
 serving, I don't recognize dst macs though... I'll investigate it if that's
 important.

The dst MAC is the MAC of your host node, according to the tcpdump you sent a few
minutes ago.
So the packet was dropped correctly.

  To sum up - all the messages claim that drop is due to reason 4
 
 
+ reason = 4;
  if (compare_ether_addr(((struct ethhdr *)skb->data)->h_dest,
  rcv->dev_addr))
  goto outf;
 
 
  I don't recall if I've mentioned this, but I'm using both bridge, and
 ucarp on top of the bridge.

Can you check without ucarp please?
We set up vlan+bridge+veth to check, and it works fine on our side.
Or maybe you can give me access to check?

Thanks,
Kirill

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Running DHCP on VPS, ( on a router.. )

2007-09-30 Thread Kirill Korotaev
Oh, sorry, wrong patch :/
I've attached a new debug patch, please check it.
It should print some information in /var/log/messages about what packets
are dropped and due to which condition.

Thanks a lot for your help!
Kirill
P.S. In production you can use a kernel with the filtering in veth_xmit() removed
until this is resolved.


Dariush Pietrzak wrote:
Can you please revert previous patch and apply the one I attached?
Does it help?
 
  it crashes on boot on machine with veth-using vps, (clean machine boots
 ok though).
  Screenshot attached
 
 
 
 
 

--- ./drivers/net/veth.c.ve2346 2007-09-30 11:26:01.0 +0400
+++ ./drivers/net/veth.c        2007-09-30 11:30:45.0 +0400
@@ -282,22 +282,26 @@ static int veth_xmit(struct sk_buff *skb
 	struct net_device *rcv = NULL;
 	struct veth_struct *entry;
 	int length;
+	int reason = 0;
 
 	stats = veth_stats(dev, smp_processor_id());
 	if (unlikely(get_exec_env()->disable_net))
 		goto outf;
 
+	reason = 1;
 	entry = veth_from_netdev(dev);
 	rcv = entry->pair;
 	if (!rcv)
 		/* VE going down */
 		goto outf;
 
+	reason = 2;
 	if (!(rcv->flags & IFF_UP)) {
 		/* Target VE does not want to receive packets */
 		goto outf;
 	}
 
+	reason = 3;
 	if (unlikely(rcv->owner_env->disable_net))
 		goto outf;
 	/* Filtering */
@@ -309,12 +313,14 @@ static int veth_xmit(struct sk_buff *skb
 		if (is_multicast_ether_addr(
 				((struct ethhdr *)skb->data)->h_dest))
 			goto out;
+		reason = 4;
 		if (compare_ether_addr(((struct ethhdr *)skb->data)->h_dest,
 				rcv->dev_addr))
 			goto outf;
 	} else if (!ve_is_super(dev->owner_env) &&
 			!entry->allow_mac_change) {
 		/* from VE to VE0 */
+		reason = 5;
 		if (compare_ether_addr(((struct ethhdr *)skb->data)->h_source,
 				dev->dev_addr))
 			goto outf;
@@ -361,6 +367,23 @@ out:
 	return 0;
 
 outf:
+	{
+		unsigned char *addr;
+		int i;
+
+		printk("veth_xmit() dropped pkt reason %d:\n", reason);
+
+		addr = ((struct ethhdr *)skb->data)->h_source;
+		printk("  src = ");
+		for (i = 0; i < ETH_ALEN; i++)
+			printk("%02x:", addr[i]);
+
+		addr = ((struct ethhdr *)skb->data)->h_dest;
+		printk(", dst = ");
+		for (i = 0; i < ETH_ALEN; i++)
+			printk("%02x:", addr[i]);
+		printk("\n");
+	}
 	kfree_skb(skb);
 	stats->tx_dropped++;
 	return 0;
___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Running DHCP on VPS, ( on a router.. )

2007-09-30 Thread Kirill Korotaev
Dariush Pietrzak wrote:
It should print some information in /var/log/messages about what packets
are dropped and due to which condition.
 
  All the messages look like this:
 
 Sep 30 13:31:11 dfw1 kernel: veth_xmit() dropped pkt reason 4:
 Sep 30 13:31:11 dfw1 kernel:   src = 00:1b:d5:84:90:d2:, dst = 
 00:08:02:ac:36:20:
  src mac seem to belong to the stations that my dhcpd is supposed to be
 serving, I don't recognize dst macs though... I'll investigate it if that's
 important.

It should be the broadcast MAC, not this one.
Can you please check whose MAC it is?
You can use arping for this.
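
A hedged way to do that (interface and address taken from the traces in this
thread; replace them with the ones you suspect):

  arping -I br107 -c 2 192.168.8.254    # the reply shows which MAC answers for this IP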


  To sum up - all the messages claim that drop is due to reason 4
 
 
+ reason = 4;
  if (compare_ether_addr(((struct ethhdr *)skb->data)->h_dest,
  rcv->dev_addr))
  goto outf;
 
 
  I don't recall if I've mentioned this, but I'm using both bridge, and
 ucarp on top of the bridge.

Yes, most likely ucarp is the reason... I need to do some reading to understand
what it is...
So it would be nice if we resolve this and fix it.

Thanks,
Kirill
___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] OpenVZ kernel based on RHEL5 kernel-2.6.18-8.1.14.el5.x86_64.rpm

2007-09-29 Thread Kirill Korotaev
I will try to prepare it on Monday.

Mark A. Schwenk wrote:
 The latest OpenVZ RHEL5 kernel for x86_64 available for download from 
 http://openvz.org/download/kernel/rhel5/ is 
 ovzkernel-2.6.18-8.1.8.el5.028stab039.1.x86_64.rpm.
 
 What is the best way to obtain a newer OpenVZ kernel based on RHEL5 
 kernel-2.6.18-8.1.14.el5.x86_64.rpm?
 ___
 Users mailing list
 Users@openvz.org
 https://openvz.org/mailman/listinfo/users
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Running DHCP on VPS, ( on a router.. )

2007-09-28 Thread Kirill Korotaev
A DHCP server should work fine with a veth bridged to the host eth0 interface.
Can you reproduce your issue when the server doesn't reply?

Delegating a physical interface to the VE is always exclusive, though you can add
a 2nd network adapter connected to the same network.

Thanks,
Kirill
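
For the second-adapter route, a hedged sketch (VE ID and names as in this thread;
the host-side veth name may differ, check ifconfig -a after the first command):

  vzctl set 107 --netif_add eth1 --save   # add a second veth pair to VE 107
  brctl addif br107 veth107.1             # attach its host end to the bridge (name illustrative)
  ifconfig veth107.1 up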


Dariush Pietrzak wrote:
 Hello,
  I'm having problems with setting up a dhcp server on vps, I figured out 
 I need to create veth device, and then bridge it to the eth0.107 interface
 that I would like to serve dhcp on. 
  And it sometimes works, but more often then not my dhcp is ignoring
 requests, not to mention - when I put up a second dhcp on way slower
 physical machine, it manages to always respond first, even though it's few
 switch hops further away..
 
  So I figured I'd give this VPS a physical interface to work with, but when
 I do that, that device disappears from the HN, which is not very good since
 I'm supposed to be routing packets coming from this interface.
 
  Is there a way to stop such physical IF from disappearing from the HN? I
 do understand security implications of such setup, I need this to work.
 
 Here's my setup:
 on HN, debian's /etc/network/interfaces:
 
 auto br107
 iface br107 inet static
   bridge_ports eth0.107
   bridge_maxwait 3
   address 192.168.0.0
   netmask 255.255.255.0
   network 192.168.0.0
   broadcast 192.168.0.255
 
 the same on VPS:
 auto eth0
 iface eth0 inet static
 address 192.168.0.251
   netmask 255.255.255.0
   network 192.168.0.0
   broadcast 192.168.0.255
   gateway 192.168.0.254
 
 
 vps.conf:
 
 NETIF=ifname=eth0,mac=00:18:51:6F:E6:41,host_ifname=veth107.0,host_mac=00:18:51:1C:F7:1D
 
 firewallOne:/home/eyck# less /etc/vz/vznet.conf 
 #!/bin/sh
 
 EXTERNAL_SCRIPT=/etc/vz/add-bridges
 firewallOne:/home/eyck# less /etc/vz/add-bridges 
 #!/bin/sh
 brctl addif br107 veth107.0
 ifconfig veth107.0 up
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Running DHCP on VPS, ( on a router.. )

2007-09-28 Thread Kirill Korotaev
Dariush Pietrzak wrote:
DHCP server should work fine with veth bridged to host eth0 interface.
Can you reproduce your issue when server doesn't reply?
 
  it looks like this:
 
 HN: tcpdump -n -i eth0.107

It is the veth pair interface, right? OK...
And is 192.168.8.254 assigned to the veth inside the VE?

 08:16:19.401880 00:1b:d4:7e:76:2a > 01:00:0c:cc:cc:cd SNAP Unnumbered, ui, 
 Flags [Command], length 50
 08:16:21.154240 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 
 00:1b:d5:2c:bf:38, length 308
 08:16:21.185096 IP 192.168.8.254.67 > 255.255.255.255.68: BOOTP/DHCP, Reply, 
 length 300
 08:16:21.187344 arp who-has 192.168.9.254 tell 192.168.9.97
 
 
 HN: tcpdump -n -i br107
 08:16:19.401880 00:1b:d4:7e:76:2a > 01:00:0c:cc:cc:cd SNAP Unnumbered, ui, 
 Flags [Command], length 50
 08:16:21.154240 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 
 00:1b:d5:2c:bf:38, length 308
 08:16:21.185096 IP 192.168.8.254.67 > 255.255.255.255.68: BOOTP/DHCP, Reply, 
 length 300
 08:16:21.187344 arp who-has 192.168.9.254 tell 192.168.9.97
 
 and finally, from inside the vps:
 VPS: tcpdump -n -i eth0
 08:16:19.401886 00:1b:d4:7e:76:2a > 01:00:0c:cc:cc:cd SNAP Unnumbered, ui, 
 Flags [Command], length 50
 08:16:21.185110 IP 192.168.8.254.67 > 255.255.255.255.68: BOOTP/DHCP, Reply, 
 length 300
 08:16:21.187356 arp who-has 192.168.9.254 tell 192.168.9.97
 08:16:21.188600 arp who-has 192.168.8.1 tell 192.168.9.97

I wonder why the timestamp of the Reply here is greater than the timestamp on the HN...
Maybe... it was a reply from someone else? That would make things logical -
someone else replies to the DHCP requests and you only see the reply.
Can you please check? (If it is true, shutting down the dhcp server inside the VE
won't affect the tcpdump output.)
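
A hedged sketch of that test (the init script name inside the VE is a guess):

  vzctl exec 107 /etc/init.d/dhcp3-server stop
  tcpdump -n -i br107 port 67 or port 68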

  as you can see, broadcast request from dhcp client is missing, but strangely 
 enough, the reply
 is clearly visible.

Or does it work, and only tcpdump is missing the packet?

  I also tried doing something like this to verify the software inside VPS: I 
 chroot to /vz/private/107/
 and then launch the dhcpd against br107, and then it works as expected.

Thanks,
Kirill

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users


Re: [Users] Running DHCP on VPS, ( on a router.. )

2007-09-28 Thread Kirill Korotaev
Is it possible to get access to your node to check?
If so, please send me credentials privately.
Also, if I provide you with a patch for testing, will you be able to rebuild the
kernel and check?

Thanks,
Kirill
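
In case it helps, a hedged sketch of the usual Debian rebuild flow (source path,
patch file name and kernel-package usage are illustrative only):

  cd /usr/src/linux-2.6.18-openvz
  patch -p1 < ~/veth-debug.patch
  make oldconfig
  make-kpkg --initrd --revision=custom.1 kernel_image
  dpkg -i ../linux-image-2.6.18-openvz*custom.1*.deb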

Dariush Pietrzak wrote:
 On Fri, 28 Sep 2007, Vitaliy Gusev wrote:
 
255.255.255.255.67: BOOTP/DHCP, Request from 00:1b:d5:2c:bf:38, length 308

Here is request.
 
  Yes.
 
 
HN: tcpdump -n -i br107
255.255.255.255.67: BOOTP/DHCP, Request from 00:1b:d5:2c:bf:38, length 308

Here is too.
 
  Yes.
 
 
 08:16:19.401886 00:1b:d4:7e:76:2a > 01:00:0c:cc:cc:cd SNAP Unnumbered, ui,
 Flags [Command], length 50 08:16:21.185110 IP 192.168.8.254.67 

And here.
 
  Ooh? where?
 
 
255.255.255.255.68: BOOTP/DHCP, Reply, length 300 08:16:21.187356 arp
 
  this is only a reply, this came from the other dhcp server, that I had to
 ask to be set up few segments away from here...
 
 

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users

