Re: [pve-devel] pveproxy issue

2014-10-16 Thread Dietmar Maurer
I don't have a reliable way to reproduce the behavior. Students connect and pveproxy hangs after a relatively short time. What are they doing? The same thing at the same time? Do they all connect to the same server? All open a noVNC console? All allocate some storage? ...?

Re: [pve-devel] pveproxy issue

2014-10-16 Thread Guy Loesch
Yes, all the students work on the same server, but that is exactly the same machine with the same users and pool setup as last year, and at that time I had 15 students working concurrently. The students create a CentOS KVM at the same time and then open a noVNC console to set up the machine. Accessing

Re: [pve-devel] pveproxy issue

2014-10-16 Thread Dietmar Maurer
Students create a CentOS KVM at the same time and then open a noVNC console to set up the machine. Accessing the machines through noVNC or by remote desktop is no problem. Can you please test whether the problem shows up if you use the traditional Java console (and do not use noVNC)?

[pve-devel] GlusterFS transport option

2014-10-16 Thread Stoyan Marinov
Hello, I'm running Proxmox PVE with GlusterFS storage over InfiniBand and have noticed that the way Proxmox assigns drives to virtual machines over GlusterFS (-drive file=gluster://..) causes QEMU and GlusterFS to communicate over TCP. This is not only slower but also more CPU intensive for
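To illustrate the difference being discussed (a rough sketch only; the server name, volume and image path below are made up), the current behaviour passes the plain gluster:// scheme, which selects the TCP transport:

    -drive file=gluster://storage1/glustervol/images/100/vm-100-disk-1.qcow2

whereas the QEMU URI syntax quoted from the man page later in this thread also accepts an explicit transport, e.g.:

    -drive file=gluster+rdma://storage1/glustervol/images/100/vm-100-disk-1.qcow2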

Re: [pve-devel] GlusterFS transport option

2014-10-16 Thread Dietmar Maurer
I was thinking of adding a transport option to storage.cfg (I have actually implemented it on my servers). In my implementation transport is optional and, if not specified, defaults to the current behaviour. Possible values are tcp, rdma or unix (I haven't tested the latter, though). So my question
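As a minimal sketch of what such a storage.cfg entry could look like (the storage name, server address and volume below are hypothetical; 'transport' is the optional property proposed above, and leaving it out would keep the current TCP behaviour):

    glusterfs: gluster-ib
            server 10.10.10.1
            volume glustervol
            content images
            transport rdma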

Re: [pve-devel] GlusterFS transport option

2014-10-16 Thread Stoyan Marinov
Should be something like this (not a git master):

---
 PVE/Storage/GlusterfsPlugin.pm | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/PVE/Storage/GlusterfsPlugin.pm b/PVE/Storage/GlusterfsPlugin.pm
index ee70603..a6f4024 100644
--- a/PVE/Storage/GlusterfsPlugin.pm

Re: [pve-devel] GlusterFS transport option

2014-10-16 Thread Dietmar Maurer
comments inline

-----Original Message-----
From: Stoyan Marinov [mailto:sto...@marinov.us]
Sent: Thursday, 16 October 2014 17:16
To: Dietmar Maurer
Cc: pve-devel@pve.proxmox.com
Subject: Re: [pve-devel] GlusterFS transport option

Should be something like this (not a git master): ---

Re: [pve-devel] GlusterFS transport option

2014-10-16 Thread Stoyan Marinov
On Oct 16, 2014, at 6:52 PM, Dietmar Maurer diet...@proxmox.com wrote: looks strange to me. how is unix://$server/$glustervolume/images/$vmid/$name related to gluster? Yeah, same here, but from the qemu/kvm man page, under "GlusterFS": GlusterFS is a user space distributed file system.

[pve-devel] [PATCH] Add transport option for glusterfs storage

2014-10-16 Thread Stoyan Marinov
Signed-off-by: Stoyan Marinov sto...@marinov.us
---
 PVE/Storage/GlusterfsPlugin.pm | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/PVE/Storage/GlusterfsPlugin.pm b/PVE/Storage/GlusterfsPlugin.pm
index ee70603..36322b1 100644
--- a/PVE/Storage/GlusterfsPlugin.pm
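The patch body is cut off in this archive. As a rough sketch of the idea only (not the actual diff), the URL-building part with an optional transport might look something like the following; gluster_url is a made-up helper name, the server/volume/transport keys mirror the options discussed above, and the images/$vmid/$name layout is taken from the earlier mail in this thread:

    #!/usr/bin/perl
    # Hypothetical sketch, not the actual patch from this thread.
    use strict;
    use warnings;

    # Build the QEMU drive URL for a GlusterFS-backed image, honouring an
    # optional transport (tcp, rdma or unix) from the storage configuration.
    sub gluster_url {
        my ($scfg, $vmid, $name) = @_;
        my $server = $scfg->{server} // 'localhost';
        my $volume = $scfg->{volume};
        # No transport configured => keep the current gluster:// (TCP) behaviour.
        my $scheme = $scfg->{transport} ? "gluster+$scfg->{transport}" : 'gluster';
        return "$scheme://$server/$volume/images/$vmid/$name";
    }

    # Prints: gluster+rdma://10.10.10.1/glustervol/images/100/vm-100-disk-1.qcow2
    print gluster_url({ server => '10.10.10.1', volume => 'glustervol', transport => 'rdma' },
        100, 'vm-100-disk-1.qcow2'), "\n";

In the real plugin this logic would presumably live in PVE/Storage/GlusterfsPlugin.pm, with the transport exposed as an enum-typed storage property as discussed above.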

Re: [pve-devel] GlusterFS transport option

2014-10-16 Thread Stoyan Marinov
I'm getting better with git :)

On Oct 16, 2014, at 6:52 PM, Dietmar Maurer diet...@proxmox.com wrote: comments inline

-----Original Message-----
From: Stoyan Marinov [mailto:sto...@marinov.us]
Sent: Thursday, 16 October 2014 17:16
To: Dietmar Maurer
Cc: pve-devel@pve.proxmox.com

Re: [pve-devel] GlusterFS transport option

2014-10-16 Thread Dietmar Maurer
Syntax for specifying a VM disk image on GlusterFS volume is

    gluster[+transport]://[server[:port]]/volname/image[?socket=...]

I'll prep a new patch with the enum thingie :) That is gluster+unix://, not unix: git it?
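To spell out that distinction (the volume, image and socket path here are made up): the plain unix:// form quoted earlier is not a gluster URL at all, whereas the man-page syntax above gives, for the unix transport, something along the lines of

    gluster+unix:///glustervol/images/100/vm-100-disk-1.qcow2?socket=/var/run/glusterd.socket

i.e. the transport is appended to the gluster scheme, server and port are dropped, and the socket is passed as a query parameter.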

[pve-devel] backup ceph high iops and slow

2014-10-16 Thread VELARTIS Philipp Dürhammer
Why do backups with Ceph cause such high IOPS? I get around 600 IOPS for 40 MB/s, which is, by the way, very slow for a backup. When I make a disk clone from local to Ceph I get 120 MB/s (which is the network limit of the old Proxmox nodes) and only around 100-120 IOPS, which is the normal for a
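A rough back-of-the-envelope reading of those figures (just dividing throughput by request rate, so only an approximation of the average request size): 40 MB/s at ~600 IOPS works out to roughly 64-70 KB per request, while 120 MB/s at 100-120 IOPS is on the order of 1 MB per request, which suggests the backup is issuing much smaller reads than the clone does.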

[pve-devel] ZFS: Implement support for ctld

2014-10-16 Thread Michael Rasmussen
Hi all, I have started implementing support for the native FreeBSD iSCSI target (ctld) in FreeBSD 10. From what I have been reading, this implementation should solve all the prior issues found in istgt and thereby make it suitable for enterprise usage. -- Hilsen/Regards Michael