I don't have a reliable way to reproduce the behavior. Students connect and
pveproxy hangs after a relatively short time.
What are they doing? Same thing at same time? All connect to same server? All
open a noVNC console?
All allocate some storage? ...?
Yes, all the students work on the same server, but that is exactly the
same machine with the same users and pool setup as last year, and at
that time I had 15 students working concurrently.
Students create a CentOS KVM at the same time and then open a noVNC console
to set up the machine. Accessing the machines through noVNC or by remote
desktop is no problem.
Can you please test whether the problem shows up if you use the traditional
Java console (and do not use noVNC)?
Hello,
I'm running Proxmox PVE with GlusterFS storage over Infiniband and have noticed
that the way Proxmox assigns drives to virtual machines over GlusterFS (-drive
file=gluster://..) causes QEMU and GlusterFS to communicate over TCP. This is
not only slower but also more CPU intensive for
I was thinking of adding (and have actually implemented on my servers) a
transport option to storage.cfg. In my implementation transport is optional
and, if not specified, defaults to the current behaviour. Possible values are
tcp, rdma or unix (I haven't tested the latter, though).
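To make the idea concrete, here is a hypothetical storage.cfg entry using the
proposed key (the storage ID, server address and volume name are made up for
illustration; only the transport line is the new part):

glusterfs: gluster-ib
        server 10.0.0.1
        volume vmstore
        content images
        transport rdma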
So my question
Should be something like this (not a git master):
---
PVE/Storage/GlusterfsPlugin.pm | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/PVE/Storage/GlusterfsPlugin.pm b/PVE/Storage/GlusterfsPlugin.pm
index ee70603..a6f4024 100644
--- a/PVE/Storage/GlusterfsPlugin.pm
comments inline
-----Original Message-----
From: Stoyan Marinov [mailto:sto...@marinov.us]
Sent: Thursday, 16 October 2014 17:16
To: Dietmar Maurer
Cc: pve-devel@pve.proxmox.com
Subject: Re: [pve-devel] GlusterFS transport option
Should be something like this (not a git master):
---
On Oct 16, 2014, at 6:52 PM, Dietmar Maurer diet...@proxmox.com wrote:
Looks strange to me. How is
unix://$server/$glustervolume/images/$vmid/$name related to gluster?
Yeah, same here, but from the qemu/kvm man page:
GlusterFS
GlusterFS is a user space distributed file system.
Signed-off-by: Stoyan Marinov sto...@marinov.us
---
PVE/Storage/GlusterfsPlugin.pm | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/PVE/Storage/GlusterfsPlugin.pm b/PVE/Storage/GlusterfsPlugin.pm
index ee70603..36322b1 100644
--- a/PVE/Storage/GlusterfsPlugin.pm
I'm getting better with git :)
On Oct 16, 2014, at 6:52 PM, Dietmar Maurer diet...@proxmox.com wrote:
comments inline
-----Original Message-----
From: Stoyan Marinov [mailto:sto...@marinov.us]
Sent: Thursday, 16 October 2014 17:16
To: Dietmar Maurer
Cc: pve-devel@pve.proxmox.com
Syntax for specifying a VM disk image on GlusterFS volume is
gluster[+transport]://[server[:port]]/volname/image[?socket=...]
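For illustration, that syntax expands to URIs like the following (server
address, volume and image paths are made up; for the unix transport the server
part stays empty and the socket parameter points at the glusterd socket):

gluster://10.0.0.1/vmstore/images/100/vm-100-disk-1.qcow2
gluster+rdma://10.0.0.1/vmstore/images/100/vm-100-disk-1.qcow2
gluster+unix:///vmstore/images/100/vm-100-disk-1.qcow2?socket=/var/run/glusterd.socket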
I'll prep a new patch with the enum thingie :)
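Not the actual patch, just a rough sketch of how an enum-constrained property
for the proposed option could be declared in PVE/Storage/GlusterfsPlugin.pm
(exact integration details are assumptions):

# sketch only, not the submitted patch: restrict the proposed
# 'transport' option to the values discussed above
sub properties {
    return {
        transport => {
            description => "Gluster transport: tcp, rdma or unix",
            type => 'string',
            enum => ['tcp', 'rdma', 'unix'],
        },
    };
}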
That is gluster+unix://, not unix:
git it?
Why do backups with Ceph cause such high IOPS?
I get around 600 IOPS for 40 MB/s, which is by the way very slow for a backup.
When I make a disk clone from local to Ceph I get 120 MB/s (which is the
network limit of the old Proxmox nodes) and only around 100-120 IOPS, which is
the normal for a
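For reference, simple division gives the implied average request sizes behind
those numbers: 40 MB/s at 600 IOPS is roughly 68 KB per request for the backup,
while 120 MB/s at 100-120 IOPS is roughly 1 MB per request for the clone.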
Hi all,
I have started implementing support for the native FreeBSD iSCSI target in
FreeBSD 10.
From what I have been reading, this implementation should solve all the prior
issues found in istgt and thereby make it suitable for enterprise usage.
--
Hilsen/Regards
Michael