- Original Message -
From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER aderum...@odiso.com, Stefan Priebe - Profihost AG
s.pri...@profihost.ag
Cc: pve-devel@pve.proxmox.com
Sent: Wednesday, 29 August 2012 07:30:26
Subject: RE: [pve-devel] adding cdrom fails
Subject: Re: [pve-devel] adding
] On Behalf Of Dietmar Maurer
Sent: Thursday, 30 August 2012 07:05
To: Stefan Priebe; pve-devel@pve.proxmox.com
Subject: Re: [pve-devel] [PATCH] - preserve authorized_key key order -
identify double keys by key and not by comment
+my @lines = split(/\n/, $data);
+foreach my $line
On 30.08.2012 10:57, Dietmar Maurer wrote:
Do we also like to check for keys without comments?
if ($line =~ m/^ssh-rsa\s+(\S+)/) {
ssh allows that? If so, we should also allow that.
The man page says the comment is optional.
Stefan
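For illustration, a minimal runnable sketch of the approach under discussion - keep the original line order and treat two entries as duplicates when the key material matches, ignoring the trailing comment (this is a sketch, not the actual patch):

use strict;
use warnings;

my $data = do { local $/; <STDIN> };   # authorized_keys content
my (%seen, @out);
foreach my $line (split(/\n/, $data)) {
    if ($line =~ m/^ssh-rsa\s+(\S+)/) {
        # $1 is the base64 key material - drop lines whose key was already seen
        next if $seen{$1}++;
    }
    push @out, $line;   # comments, options and other key types pass through
}
print join("\n", @out), "\n";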
Does Proxmox support creating private virtual bridges?
See:
private virtual bridge:
http://www.linux-kvm.org/page/Networking
Greets,
Stefan
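In the linux-kvm sense a private bridge is just a bridge with no physical port attached, so guests on it can only talk to each other. A minimal /etc/network/interfaces sketch (bridge name assumed):

auto vmbr2
iface vmbr2 inet manual
        bridge_ports none
        bridge_stp off
        bridge_fd 0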
on the libiscsi todo is a reconnect part:
* Autoconnect for session failures.
When the tcp session fails, try several times to reconnect and relogin.
If successful, re-issue any commands that were in flight.
When this is introduced it should be pretty simple to add a patch which
round
Hello list,
i would like to store custom key value pairs for a VM. Is this possible
through the API?
I tried via
pve:/nodes/myhost/qemu/110 set config ABC EFG
400 wrong number of arguments
but I tried the same with sockets:
pve:/nodes/myhost/qemu/110 set config sockets 1
400 wrong number of
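A hedged note on pvesh usage: values normally have to be passed as named options rather than positional words, and only keys defined in the VM config schema are accepted, so something along these lines (exact syntax assumed) should work, while arbitrary keys will be rejected:

set config -sockets 1
set config -description "ABC=EFG"

Using the free-form description field is one possible workaround for storing custom key/value pairs.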
On 24.09.2012 10:50, Dietmar Maurer wrote:
I tried to improve snapshot behavior a bit:
- take snapshot is now a background task (does not block monitor)
- try to save VM state while VM is online
So downtime during snapshot should be shorter now, especially if the VM
uses much RAM.
Feel free
On 25.10.2012 16:24, Dietmar Maurer wrote:
while testing xbzrle i've seen several times that a migration fails
We disable xbzrle because it is buggy ...
But it was the PVE code which stopped this VM after failed migration. Or
do you mean without xbzrle migration will NEVER fail?
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
From: Stefan Priebe g...@profihost.ag
Signed-off-by: root r...@neuerserver.de-nserver.de.de-nserver.de
---
PVE/API2/Qemu.pm |8 ++--
1 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 8c4c323..ce1f6c5 100644
--- a/PVE/API2/Qemu.pm
From: Stefan Priebe g...@profihost.ag
Signed-off-by: root r...@neuerserver.de-nserver.de.de-nserver.de
---
PVE/API2/OpenVZ.pm |6 +-
1 files changed, 5 insertions(+), 1 deletions(-)
diff --git a/PVE/API2/OpenVZ.pm b/PVE/API2/OpenVZ.pm
index d609167..846021a 100644
--- a/PVE/API2
From: Stefan Priebe g...@profihost.ag
Signed-off-by: root r...@neuerserver.de-nserver.de.de-nserver.de
---
PVE/API2/OpenVZ.pm |9 ++---
1 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/PVE/API2/OpenVZ.pm b/PVE/API2/OpenVZ.pm
index 846021a..47d7d70 100644
--- a/PVE/API2
From: Stefan Priebe g...@profihost.ag
Signed-off-by: root r...@neuerserver.de-nserver.de.de-nserver.de
---
PVE/Storage.pm | 13 -
1 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 271b3f9..a5f4d83 100755
--- a/PVE/Storage.pm
+++ b
From: Stefan Priebe g...@profihost.ag
Signed-off-by: root r...@neuerserver.de-nserver.de.de-nserver.de
---
PVE/API2/Nodes.pm | 11 +++
1 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index c2d9166..b1d505e 100644
--- a/PVE/API2
On 30.10.2012 18:14, Dietmar Maurer wrote:
I think it's here :
/usr/share/perl5/PVE/Cluster.pm
sub create_rrd_graph {
I guess I have also seen that.
The big plan is to use ExtJS graph feature to display rrd data. But
that is a considerable amount of work (debug ExtJS code).
nice but i
[mailto:pve-devel-
boun...@pve.proxmox.com] On Behalf Of Stefan Priebe - Profihost AG
Sent: Thursday, 01 November 2012 14:23
To: pve-devel@pve.proxmox.com
Subject: [pve-devel] KVM live migration limited to SSH speed
Hello list,
right now with the latest patches, a tuned ciphers list and openssl aes
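For context, tuning of this kind is usually done by restricting the ssh client used for the migration tunnel to a cheap cipher; a sketch for /root/.ssh/config (host pattern and cipher choice assumed, not from the original mail):

Host 10.255.0.*
    Ciphers aes128-ctr

Even then a single ssh process on one CPU core remains the bottleneck, which is what this thread is about.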
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
data/PVE/Cluster.pm |5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/data/PVE/Cluster.pm b/data/PVE/Cluster.pm
index 3387fc8..78a7006 100644
--- a/data/PVE/Cluster.pm
+++ b/data/PVE/Cluster.pm
@@ -1021,9 +1021,8
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
.../eglibc-2.11.3/debian/control |2 +-
.../eglibc-2.11.3/debian/patches/series|2 +
.../debian/patches/synfs_support.patch | 129
3 files changed, 132 insertions(+), 1
On 07.11.2012 15:13, Corin Langosch wrote:
Stefan Priebe - Profihost AG s.priebe@... writes:
To me scsi-generic with virtio-scsi is perfect for iscsi - much better
than virtio. But yes you're correct.
I'll do some tests - imho it is only supported by scsi-hd - scsi-block
doesn't know
Hello list,
even though xbzrle is now disabled i've again seen a never-ending migration
(waited 30 minutes).
CPU is just 15% while migrating. Only MySQL is running.
Greets,
Stefan
overkill... but SSDs are nearly as
cheap as SAS drives...
Have you tested with a gigabit link with bigger latencies?
No. But i hope i can compare Arista 10G against HP 10G soon.
Stefan
- Original Message -
From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: Alexandre DERUMIER
... but each node has a
capacity of 360.000 iops... perhaps a bit overkill... but SSDs are nearly as
cheap as SAS drives...
So, if you launch fio from different kvm guests on different kvm hosts, are you
limited by the ceph cluster speed?
- Original Message -
From: Stefan Priebe s.pri
Hello list,
as qemu-kvm 1.3 will be the same as qemu.git 1.3, since the trees were merged,
I was trying to port all pve patches to qemu.git.
This worked fine except savevm-async.c. Right now pve registers some
custom callbacks for qemu_fopen_ops_buffered. But the qemu devs removed this
option and so
On 18.11.2012 10:28, Dietmar Maurer wrote:
# ./x86_64-softmmu/qemu-system-x86_64 -monitor stdio -enable-kvm -hda image.raw ...
you meant -qmp not monitor?
no
Mhm, with -monitor the qm monitor shell stuff doesn't work; I always get the
error that the qmp human interface cannot be found.
On 18.11.2012 12:51, Michael Rasmussen wrote:
On Sat, 17 Nov 2012 20:14:04 +0100
Michael Rasmussen m...@datanom.net wrote:
On Sat, 17 Nov 2012 19:41:45 +0100
Stefan Priebe s.pri...@profihost.ag wrote:
Wouldn't it be useful to be able to move disks from VM X to VM Y? I don't like
On 18.11.2012 16:00, Dietmar Maurer wrote:
Wouldn't it be useful to be able to move disks from VM X to VM Y? I don't like
that disks are fixed to vms.
We assign disks to VMs. So there is a Disk - VM (owner) relation.
This relation is the basic concept we use for cluster wide locking, and
From Stefan Priebe s.pri...@profihost.ag # This line is ignored.
From: Stefan Priebe s.pri...@profihost.ag
Cc: pve-devel@pve.proxmox.com
Cc: pbonz...@redhat.com
Cc: ceph-de...@vger.kernel.org
Subject: QEMU/PATCH: rbd block driver: fix race between completion and cancel
In-Reply-To:
ve-de
From: Stefan Priebe s.pri...@profhost.ag
This one fixes a race, which qemu also had in the iscsi block driver, between
cancellation and io completion.
qemu_rbd_aio_cancel was not synchronously waiting for the end of
the command.
It also removes the useless cancelled flag and introduces instead
a status
rbd / rados quite often returns the length of writes
or discarded blocks. These values might be bigger than an int.
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
block/rbd.c |4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/block/rbd.c b/block/rbd.c
index
Hi Stefan,
On 20.11.2012 17:29, Stefan Hajnoczi wrote:
On Tue, Nov 20, 2012 at 01:44:55PM +0100, Stefan Priebe wrote:
rbd / rados quite often returns the length of writes
or discarded blocks. These values might be bigger than an int.
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
Hello list,
does migration work for anybody with qemu.git? For me the qm monitor
always fails immediately or after a very short period.
Greets Stefan
---
PVE/QemuServer.pm |4
1 file changed, 4 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 7a11e82..7d4a1a1 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2155,6 +2155,10 @@ sub config_to_command {
push @$cmd, '-daemonize';
+push @$cmd,
On 03.12.2012 17:06, Dietmar Maurer wrote:
Cc: pve-devel@pve.proxmox.com; Stefan Priebe - Profihost AG
Subject: Re: [pve-devel] pve-storage : volume_lock volume_unlock
Why (besides being 'cool')?
I think Stefan wants to test without manual patching.
Then it would make sense for a short
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
www/manager/VNCConsole.js |6 +++---
www/manager/qemu/CmdMenu.js |2 +-
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/www/manager/VNCConsole.js b/www/manager/VNCConsole.js
index 4be014d..ace2e87 100644
--- a/www
is enough to resize the volume.
(It is the qemu rbd block driver which resizes the volume and then advertises
to the guest that the size has changed.)
Just an idea, do you use rbd image format v1 or v2 ? (I only tested it with
format v1)
- Original Message -
From: Stefan Priebe - Profihost AG
Hi Alexandre,
it seems I have to trigger a scsi rescan via
/sys/devices/pci:00/:00:05.0/virtio0/host2/target2:0:0/2:0:0:1/rescan
sad that this is really needed for virtio-scsi
Greets,
Stefan
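For reference, the rescan in question is triggered by writing to that sysfs node, and entirely new LUNs can be probed at the host level; a sketch (device path shortened as in the original):

echo 1 > /sys/.../target2:0:0/2:0:0:1/rescan      # re-read size of an existing LUN
echo "- - -" > /sys/class/scsi_host/host2/scan    # probe for new channel:target:lun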
On 14.12.2012 20:54, Stefan Priebe wrote:
Hi Alexandre,
you're correct your code is fine. I'm
=865b58c05b841964f48f574c2027311bd04db8a1
seems to have been released after kernel 3.6, so maybe it's working with kernel 3.7.
what kernel version do you use in your guest ?
- Original Message -
From: Stefan Priebe s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel
Hello,
i would like to add support for linked disks. These are disks owned by a
VM but available in other VMs too. For example for GFS, OCFS2, ...
Is there a general interest in it?
Dietmar what would be the right way?
1.) introduce a command in qm cmd?
2.) introduce API2
3.) introduce Web
on storage)
Same as above.
Greets,
Stefan
- Original Message -
From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: Dietmar Maurer diet...@proxmox.com
Cc: pve-devel@pve.proxmox.com
Sent: Sunday, 16 December 2012 09:21:25
Subject: Re: [pve-devel] introduce linked disks
Owner
Hi,
thanks. I think I need to mark a disk as shared - so that I can block
deletion while shared.
Does the API allow modifying a VM config running on host B from host A?
Greets,
Stefan
On 18.12.2012 17:56, Dietmar Maurer wrote:
sorry to ask this again - but i still don't know what you
Hi,
On 23.12.2012 14:18, Alexandre DERUMIER wrote:
I have redone tests on my side, with linux and windows guests, vms with 4 -
16GB ram
I really can't reproduce your problem.
migration speed is around 400MB/S.
How much mem was in use? Did you try my suggestion with
find / -type f -print
DERUMIER aderum...@odiso.com
To: Stefan Priebe s.pri...@profihost.ag
Cc: pve-devel@pve.proxmox.com
Sent: Monday, 24 December 2012 15:38:13
Subject: Re: [pve-devel] Baloon Device is the problem! Re: migration problems
since qemu 1.3
maybe it's related to qmp queries to balloon driver (for stats) during
migration
, starting from
pve ?
just to be sure, can you try to run 'info balloon' from the human monitor console?
(I would like to see if the balloon driver is correctly working)
- Original Message -
From: Stefan Priebe s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel
Hi,
On 26.12.2012 17:40, Alexandre DERUMIER wrote:
I don't know if we really need a default value, because it's always setting
migrate_downtime to 1.
It also isn't accepted; you get the answer back that 1 isn't a number.
I don't know what format a number needs.
Now, I don't know what really
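A hedged aside on what "isn't a number" usually means here: PVE validates option values against a JSONSchema-style type, and a numeric type check is typically a regex along these lines (a sketch, not the actual PVE code):

if ($value !~ m/^-?\d+(\.\d+)?$/) {
    die "value '$value' does not look like a number\n";
}

So a plain 1 should pass; a failure suggests the parameter is declared with a different type or mangled before validation.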
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
PVE/QemuMigrate.pm | 28
PVE/QemuServer.pm | 15 ---
2 files changed, 24 insertions(+), 19 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 0711681..de84fed 100644
to be OK.
Greets,
Stefan
On 26.12.2012 21:41, Stefan Priebe wrote:
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
PVE/QemuMigrate.pm | 28
PVE/QemuServer.pm | 15 ---
2 files changed, 24 insertions(+), 19 deletions(-)
diff --git a/PVE
Hi,
On 27.12.2012 11:21, Alexandre DERUMIER wrote:
Sure you mean just the output in the web GUI?
yes
Output with latest qemu-server and latest pve-qemu-kvm.
VM with 4GB Mem and 100MB used (totally IDLE):
Dec 27 12:55:46 starting migration of VM 105 to node 'cloud1-1203'
(10.255.0.22)
Hi,
On 27.12.2012 12:36, Dietmar Maurer wrote:
for me expected downtime is 0 until the end of the migration.
Just uploaded a fix for that - please can you test:
https://git.proxmox.com/?p=pve-qemu-kvm.git;a=commit;h=ca5316794924e7af304c5af762d68e0f0e5cdc5d
Thanks, sadly it doesn't fix the
Hi,
On 27.12.2012 13:19, Dietmar Maurer wrote:
The same again with 1GB Memory used (cached mem):
Does the cache contain duplicated pages? Or pages filled with zeroes?
Shouldn't be zeros. If there are duplicated pages I've no idea.
I've done the following:
- boot VM (mem usage is 100MB)
-
Hi,
On 27.12.2012 13:39, Alexandre DERUMIER wrote:
not right now - but i tested this yesterday and didn't see a difference
so i moved again to 3.6.11.
I'll do a test with a 3.6 kernel too, to see if I have a difference
Thanks! Will retest with pve kernel too.
Have you tried with the last
Strangely the status of the VM is always paused after migration.
Stefan
On 27.12.2012 15:18, Stefan Priebe wrote:
Hi,
have now done the same.
With current git qemu migration is really fast, using 1.6GB memory:
Dec 27 15:17:45 starting migration of VM 105 to node 'cloud1-1202'
(10.255.0.20
- downtime 603 ms
Dec 27 15:36:12 migration status: completed
Dec 27 15:36:16 migration finished successfully (duration 00:01:39)
TASK OK
- Original Message -
From: Stefan Priebe s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com, Dietmar Maurer diet
is always paused after migration.
Stefan
On 27.12.2012 15:18, Stefan Priebe wrote:
Hi,
have now done the same.
With current git qemu migration is really fast, using 1.6GB memory:
Dec 27 15:17:45 starting migration of VM 105 to node 'cloud1-1202'
(10.255.0.20)
Dec 27 15:17:45 copying disk images
Hello list,
to test a new patch I want to edit datacenter.cfg manually. But there is
a .version file inside /etc/pve, and it seems I still get a cached version.
Is there a way to manually edit this file and update the cache / version?
Greets,
Stefan
This patch adds support for unsecure migration using a direct tcp connection
KVM => KVM instead of an extra SSH tunnel. Without ssh the limit is just the
bandwidth and no longer the CPU / one single core.
You can enable this by adding:
migration_unsecure: 1
to datacenter.cfg
Examples using qemu
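For illustration only (addresses and port assumed, not taken from the original mail): a direct tcp migration at the QEMU level means the target is started with -incoming tcp:0.0.0.0:60000 and the source monitor then runs:

migrate tcp:10.255.0.22:60000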
= migrate_max_downtime();
}
return 0;
- Original Message -
From: Dietmar Maurer diet...@proxmox.com
To: Stefan Priebe - Profihost AG s.pri...@profihost.ag
Cc: Alexandre DERUMIER aderum...@odiso.com, pve-devel@pve.proxmox.com
Sent: Friday, 28 December 2012 07:39:50
Subject: RE: [pve-devel] [PATCH
Hi Alexandre,
On 29.12.2012 14:51, Alexandre DERUMIER wrote:
Great to know that you finally found it!
Not found ;-) I just know that the rework of the migration code fixes it.
Maybe it was a deadlock - they've changed the locking and
threading handling.
(Did you respond to
Hi,
On 28.12.2012 13:09, Dietmar Maurer wrote:
This is just in-memory openssl speed. There I'm getting 600-700MB/s:
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128-cbc   648664.33k   688924.90k
Hi,
On 29.12.2012 15:02, Alexandre DERUMIER wrote:
oh, ok.
I'll test the patches on Monday.
Thanks!
Does migration work fine for you with them?
Yes - it works fine / perfectly. But I've no VM with a desktop to play
HD videos ;-)
Stefan
- Original Message -
From: Stefan Priebe
Hi,
i really like that idea instead of having a fixed downtime of 1s. What
about starting at 0.5s and then adding 0.5s every 15 runs?
Dietmar what's your opinion?
Greets,
Stefan
On 27.12.2012 11:14, Alexandre Derumier wrote:
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
successfully (duration 00:00:19)
TASK OK
Greets,
Stefan
On 29.12.2012 15:07, Stefan Priebe wrote:
Hi,
On 29.12.2012 15:02, Alexandre DERUMIER wrote:
oh, ok.
I'll test the patches on Monday.
Thanks!
Does migration work fine for you with them?
Yes - it works fine / perfectly. But I've no VM
Hello,
today I tried, in an emergency, to manually move qemu conf files.
But that was not possible.
I had 3 PVE nodes where 2 failed at the same time - caused by a power failure.
I then tried to manually move the conf files to the only working node:
[cloud1-1202: /etc/pve]# mv
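The command itself is truncated above; the kind of move meant here, with hypothetical node names and VMID, would be:

mv /etc/pve/nodes/cloud1-1203/qemu-server/110.conf /etc/pve/nodes/cloud1-1202/qemu-server/

which pmxcfs refuses while the cluster has no quorum, since /etc/pve turns read-only.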
expected_downtime value)
if (remaining_memory >= last_remaining_memory) {
    counter++
}
if (counter == 15) {
    set_migrate_downtime = migrate_downtime + 0.X sec
    counter = 0;
}
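A minimal runnable Perl sketch of that idea (the two helpers are assumed wrappers around QMP 'query-migrate' and 'migrate_set_downtime', not the actual PVE code):

use strict;
use warnings;

my $counter  = 0;
my $downtime = 0.5;    # start value in seconds
my $last_remaining;

while (my $stat = query_migrate_status()) {    # hypothetical QMP status helper
    last if $stat->{status} ne 'active';
    my $remaining = $stat->{ram}->{remaining};
    # not converging if remaining memory stops shrinking between polls
    $counter++ if defined($last_remaining) && $remaining >= $last_remaining;
    $last_remaining = $remaining;
    if ($counter == 15) {
        $downtime += 0.5;                      # allow a longer pause
        set_migrate_downtime($downtime);       # hypothetical setter wrapper
        $counter = 0;
    }
}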
- Original Message -
From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: Dietmar Maurer diet...@proxmox.com
Cc
Hi Alexandre,
attached you'll find a patchset with 10 more patches which make
migration even faster. They're stable for me too.
Greets,
Stefan
On 29.12.2012 15:54, Stefan Priebe - Profihost AG wrote:
Great! For me even xbzrle works fine now.
On 29.12.2012 15:47, Alexandre
Hi,
On 30.12.2012 16:29, Alexandre DERUMIER wrote:
Seems stable here too, thanks!
great! Will definitely keep them in my branch. Migration is now as fast
and as stable as I originally expected from kvm.
Dietmar, are you interested too / have you thought about implementing them
into
Hi,
On 30.12.2012 16:31, Alexandre DERUMIER wrote:
I'm testing your patch.
remaining memory can also stay at 0 at the end (with the last qemu git migration
patch), so we need to also check that.
OK - will wait for your patch. Still missing part1.
also in your patch, what is
+
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
PVE/QemuMigrate.pm |6 ++
1 file changed, 6 insertions(+)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 96dcae3..55dc2ad 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -475,6 +475,12 @@ sub phase2_cleanup
On 02.01.2013 14:38, Alexandre DERUMIER wrote:
Hi all (and happy new year ;),
Happy new Year!
Ceph 0.56 stable is coming soon (testing for now, but should go in stable
branch in coming days).
Do you need to keep compatibility with 0.48 clusters with the storage plugin ?
(as rbd is not
Any news on this?
Stefan
On 14.01.2013 20:48, Stefan Priebe wrote:
Hi Alexandre,
i can't help with your corosync problem but I'm running 3.6.11 on two
nodes and 3.7.1 on another one without a problem.
Stefan
On 14.01.2013 20:47, Alexandre DERUMIER wrote:
Ok, I found the problem.
I had
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
PVE/Storage/RBDPlugin.pm | 20 +---
1 file changed, 13 insertions(+), 7 deletions(-)
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 1e27ccb..e77b768 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
PVE/Storage/RBDPlugin.pm | 20 +---
1 file changed, 13 insertions(+), 7 deletions(-)
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 1e27ccb..3d15369 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE
Hi,
On 26.01.2013 09:30, Dietmar Maurer wrote:
I've removed them as they're just there to grab the raw device of the Bridge.
But we don't need that if we build the vlan on top of the bridge instead of on
top of the raw device.
Yes, AFAIR but setting it on the bridge did not work with our
Hi,
On 26.01.2013 09:27, Dietmar Maurer wrote:
- activate gvrp by default (it doesn't harm if the switch does not support it or
it is disabled)
Please can you elaborate a bit on why we need gvrp?
Oh you don't need to configure VLANs on your switch for VMs.
Good example:
- you have a
Hi,
On 27.01.2013 09:34, Dietmar Maurer wrote:
No it doesn't. gvrp is only available for vlan. You can't set it on a bridge.
Ah, I see. Let's wait until Alexandre finishes his tests.
OK. For me it works fine with default PVE Kernel.
Stefan
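For illustration, with iproute2 a VLAN device can be created on top of a bridge with GVRP enabled (interface names assumed):

ip link add link vmbr0 name vmbr0.100 type vlan id 100 gvrp on

The switch then learns VLAN 100 dynamically via GVRP instead of requiring manual VLAN configuration per port.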
Thanks!
On 28.01.2013 12:00, Dietmar Maurer wrote:
applied, thanks!
-Original Message-
From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel-
boun...@pve.proxmox.com] On Behalf Of Stefan Priebe
Sent: Friday, 25 January 2013 22:16
To: pve-devel@pve.proxmox.com
Subject: [pve-devel
Hi,
thanks. I've already ported all patches to current qemu git master
except the backup patches.
Everything works fine except the savevm-async - it compiles fine but the
VM hangs. Do you already have an idea what needs to be changed there? So
maybe I can start working on that.
On
Hello,
i would like to allow a group of ldap users access to pve. But it seems
this is not possible. I have to specify the access level for each user.
Is this correct?
Greets,
Stefan
guest vms are using vmbr1 as bridge
- Original Message -
From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Thursday, 7 February 2013 18:09:04
Subject: Re: [pve-devel] new bridge code doesn't work with redhat
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
Makefile |2 +-
debian/changelog |7 +++
2 files changed, 8 insertions(+), 1 deletion(-)
diff --git a/Makefile b/Makefile
index 8222c39..c90c4d6 100644
--- a/Makefile
+++ b/Makefile
@@ -23,7 +23,7 @@ download
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
Makefile |4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/Makefile b/Makefile
index c90c4d6..9eecd33 100644
--- a/Makefile
+++ b/Makefile
@@ -12,7 +12,7 @@ ARCH=amd64
KVM_DEB=${KVMPACKAGE}_${KVMVER}-${KVMPKGREL
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
debian/rules |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/debian/rules b/debian/rules
index 2f582f9..7939d87 100755
--- a/debian/rules
+++ b/debian/rules
@@ -78,7 +78,7 @@ install: build
# Install
Hi,
On 12.02.2013 16:27, Dietmar Maurer wrote:
OK just send a mail and i'll send you the patches.
Please can you send the patches now?
Sure. It will be a few more patches - as I've changed some more stuff; just
drop the ones you don't need.
I just uploaded the updates for 1.4. Please can
On 12.02.2013 20:20, Dietmar Maurer wrote:
Seem there is an issue with expected_downtime. This is not set anymore?
Yes, this does not exist anymore. The whole migration code changed a lot
- that's why it's working now for me ;-)
But we still disable xbzrle - should we enable that now?
No,
failed - no such volume 'vmstorssd1:vm-123-disk-1'
INFO: Backup job finished with errors
TASK ERROR: job errors
-
Stefan
On 16.02.2013 11:30, Stefan Priebe - Profihost AG wrote:
Is the backup code ready for testing? Should it work with rbd? Anything special
thx you're too fast for me ;-)
Stefan
On 16.02.2013 18:22, Alexandre Derumier wrote:
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/VZDump/QemuServer.pm | 14 +-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/PVE/VZDump/QemuServer.pm
Nobody has an opinion about that? It has happened to me several times that I
put in 100MB/s when I wanted 100Mbit/s.
Stefan
Hello,
right now the network limit is in MB/s - shouldn't it be Mbit/s like all
network stuff in the world?
Stefan
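For reference: 1 MB/s = 8 Mbit/s, so a limit entered as 100 MB/s actually permits 800 Mbit/s - mixing up the two units is an error of a factor of eight.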
Hi,
1.) does the new Backup code snapshot mode mean the snapshot incl. RAM?
or without RAM?
2.) I'm getting an error that only one Backup is allowed? But I can't
find a setting about that.
Thanks!
Greets,
Stefan
. But the migration
task says failed.
Stefan
- Original Message -
From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: pve-devel@pve.proxmox.com
Sent: Friday, 22 February 2013 15:01:25
Subject: [pve-devel] successfull migration but failed resume
Hello,
I've seen this sometimes
the source kvm process is
dead anyways so starting on the new target won't be a problem
4.) if the target host crashes while migrating the source host will
detect this and abort the migration + move the config back.
Greets,
Stefan
On 24.02.2013 13:48, Stefan Priebe wrote:
Hi Alexandre,
On
Great to see this patch coming.
RHEL7? Is there even an ETA? And then the OpenVZ team needs another
two years to rebase their work on that... ;-(
Stefan
On 24.02.2013 16:30, Alexandre DERUMIER wrote:
On 24.02.2013 14:44, Dietmar Maurer wrote:
4.) if the target host crashes while migrating the source host will detect this
and
abort the migration + move the config back.
This is technically not possible - how do you detect that?
mhm good question... - it was just a spontaneous idea. I
Hi,
On 24.02.2013 14:44, Alexandre DERUMIER wrote:
The config file should always be with the running kvm.
Or you'll lose graph stats for example during the migration. (not everybody
has a 10gb link, so migration can take time)
and 4) if something bad happens and the config is not moved back,
fails.
But not crashed - I don't see a segfault. Most probably paused.
Stefan
- Original Message -
From: Stefan Priebe s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Sunday, 24 February 2013 13:48:05
Subject: Re: [pve-devel] successfull
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
PVE/API2/Qemu.pm |4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index bed2f4c..b26a0ce 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1137,10 +1137,10 @@ __PACKAGE__
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
PVE/API2/Qemu.pm |3 +++
1 file changed, 3 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index bed2f4c..32bacaf 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1123,6 +1123,9 @@ __PACKAGE__->register_method
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
qm |3 +++
1 file changed, 3 insertions(+)
diff --git a/qm b/qm
index 93f621c..2ab5eaa 100755
--- a/qm
+++ b/qm
@@ -48,6 +48,9 @@ sub run_vnc_proxy {
my $path = PVE::QemuServer::vnc_socket($vmid);
+my $c;
+while ( ++$c
do I need to add pve2.4?
sub extract_vm_stats
On 10.03.2013 20:00, Stefan Priebe wrote:
Where does this stuff go into rrd? My idea would be to split receive /
transmit into a subkey in $d.
So:
$d->{net0}{netin/out}
$d->{net1}{netin/out}
...
Stefan
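A sketch of the proposed per-interface layout (field names assumed):

$d->{net0} = { netin => 123456, netout => 654321 };   # per-interface counters
$d->{net1} = { netin => 0,      netout => 0 };
# instead of the current single aggregated pair:
$d->{netin}  = 123456;
$d->{netout} = 654321;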
On 09.03.2013 11:04, Alexandre
Hi,
On 11.03.2013 16:23, Dietmar Maurer wrote:
do I need to add pve2.4?
next version will be 3.0 ;-)
This will require some low level C hacking in pve-cluster/data/src/status.c
(search for rrd).
We need to create a new rrd definition, and we can use key 'pve3-vmnet'.
Just tell if you need
On 18.04.2013 06:32, Dietmar Maurer wrote:
Then let's go this way. It's much simpler than adding RRD.
So the question is should this be a completely new call
Yes, I think this should be a new call:
GET /nodes/<nodename>/netstat
[
{ vmid => 100, dev => net0, in => XXX, out => YYY },
{ vmid => 100,