Seems they already have a working version:
http://lists.gnu.org/archive/html/qemu-devel/2012-08/msg01538.html
But so far they have not posted the complete patch, only an RFC.
> -----Original Message-----
> From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel-boun...@pve.proxmox.com] On Behalf
> what if we add an 'ahci' option instead:
>
> ahci: 0|1
>
> If set, we use ahci/sata for ide drives, else we use normal ide mode.
>
> That way we have the same logic as we use for scsi (scsihw).
>
> We would also save some pci addresses.
>
> Using ide and sata at the same time does not really make sense.
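As an illustration of the proposed option, the generated kvm command line might differ roughly like this (a hedged sketch; device ids and the disk path are invented, not taken from any patch):

```
# ahci: 0 - plain ide attachment
-drive file=/var/lib/vz/images/101/vm-101-disk-1.raw,if=none,id=drive-ide0 \
-device ide-hd,bus=ide.0,unit=0,drive=drive-ide0

# ahci: 1 - the same drive behind an AHCI controller (one controller,
# several ports, so fewer pci addresses are consumed)
-device ahci,id=ahci0 \
-drive file=/var/lib/vz/images/101/vm-101-disk-1.raw,if=none,id=drive-sata0 \
-device ide-hd,bus=ahci0.0,drive=drive-sata0
```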
> I don't find any info about it.. :(
>
> Can we try with a tcp socket to see if we can do more than 1 connection ?
# kvm -qmp tcp:localhost:,server,nowait
# telnet 127.0.0.1
but this is also limited to one connection.
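As a toy sketch of the experiment, in Python: a stub server stands in for the kvm QMP socket and sends the greeting only to the connection it is currently serving, mirroring the behaviour observed above (the greeting payload is abbreviated, not the real one):

```python
import json
import socket
import threading

# Stub standing in for "kvm -qmp tcp:localhost:<port>,server,nowait":
# it accepts a single client and sends it an (abbreviated) QMP greeting.
def serve_one(srv):
    conn, _ = srv.accept()
    conn.sendall(json.dumps({"QMP": {"version": {}, "capabilities": []}}).encode() + b"\n")
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))      # any free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=serve_one, args=(srv,), daemon=True).start()

# First client (what "telnet 127.0.0.1 <port>" would show): gets the greeting.
client = socket.create_connection(("127.0.0.1", port))
greeting = json.loads(client.makefile().readline())
print("QMP" in greeting)
```

Against a real monitor, a second connection may complete at the TCP level but never receives a greeting, which is the one-connection limit discussed here.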
done
> Would you mind bumping the version? Right now I always get my
> authorized_keys file mangled after adding a node ;-)
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> here is how I see things:
>
>
> for nexenta:
> no vmstate (not possible)
Why not possible? We can simply allocate a normal volume and store the state there?
> --
>- send freeze-fs via qemu-ga if available
>- use nexenta api to snapshot all disks
>- unfreeze the filesystems via qemu-ga
>>Yes, but finally we want to do a "blockdev-snapshot-sync"
so, no vmstate? (because vmstate is only possible through savevm)
and blockdev-snapshot-sync only works for qcow2 and external snapshots (for the
moment)
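For reference, the QMP command in question takes roughly this shape (the device name and snapshot path are invented for the example):

```
{ "execute": "blockdev-snapshot-sync",
  "arguments": { "device": "drive-virtio0",
                 "snapshot-file": "/var/lib/vz/images/101/snap1.qcow2",
                 "format": "qcow2" } }
```

It creates an external qcow2 file on top of the current image, which is why it covers only disk state and not vmstate.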
>>- how does that call methods inside our library?
>>Or how do you plan to create a snapshot?
> >>And you want to implement that API twice? Not really an option for me.
> I don't understand what you want to say. I just want to add a snapshot sub in
> the nexenta storage plugin, something like this:
>
> sub nexenta_snapshot_zvol {
>     my $json = '{"method": "snapshot","object" : "zvol","p
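Since the Perl snippet above is cut off, here is a hedged sketch in Python of what building such a JSON-RPC request body could look like; only the method and object names come from the snippet, the params layout and names are assumptions:

```python
import json

# Hypothetical request builder; "snapshot"/"zvol" come from the truncated
# Perl above, the params layout and all names are assumptions.
def nexenta_snapshot_zvol(zvol, snapname):
    return json.dumps({
        "method": "snapshot",
        "object": "zvol",
        "params": [zvol, snapname],
    })

req = nexenta_snapshot_zvol("tank/vm-101-disk-1", "vzsnap0")
print(req)
```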
>> >>But limited (no support for arbitrary snapshot trees)?
>> Yes indeed. (But try to do 1000 snapshots with qcow2 ;-)
>I never tried that - is it slow? Or what is the problem?
Yes, it's slow. (But maybe it's better now, I tested it a year ago.)
Note that VMware, for example, recommends to us
On 30.08.2012 10:59, Stefan Priebe wrote:
On 30.08.2012 10:57, Dietmar Maurer wrote:
Do we also want to check for keys without comments?
if ($line =~ m/^ssh-rsa\s+(\S+)/) {
ssh allows that? If so, we should also allow that.
The man page says the comment is optional.
Would you mind bumping the version?
ok, no problem, this can wait for qemu 1.3.
----- Original Message -----
From: "Dietmar Maurer"
To: "Alexandre DERUMIER"
Cc: pve-devel@pve.proxmox.com
Sent: Thursday, August 30, 2012 13:29:51
Subject: RE: [pve-devel] [PATCH 3/3] livemigrate: activate xbzrle cache
> sure, no problem.
>
> maybe we can add a flag? qm migrate --xbzrle
I found a ruby qmp client
https://github.com/lupine/qmp_client
They use a thread pool of 10 threads, so it should be possible to do more than
one connection.
----- Original Message -----
From: "Alexandre DERUMIER"
To: "Dietmar Maurer"
Cc: pve-devel@pve.proxmox.com
Sent: Thursday, August 30, 2012 13:30:
I don't find any info about it.. :(
Can we try with a tcp socket to see if we can do more than 1 connection ?
----- Original Message -----
From: "Dietmar Maurer"
To: "Dietmar Maurer" , "Alexandre DERUMIER"
Cc: pve-devel@pve.proxmox.com
Sent: Thursday, August 30, 2012 10:25:45
Subject: RE: live migration
> sure, no problem.
>
> maybe we can add a flag? qm migrate --xbzrle
Yes, if you want. But we will enable that anyway as soon as it is stable, so
maybe it is not worth introducing such an option now.
sure, no problem.
maybe we can add a flag? qm migrate --xbzrle
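For reference, xbzrle is enabled over QMP roughly like this (the cache-size value is only an example):

```
{ "execute": "migrate-set-capabilities",
  "arguments": { "capabilities": [ { "capability": "xbzrle", "state": true } ] } }
{ "execute": "migrate-set-cache-size",
  "arguments": { "value": 268435456 } }
```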
----- Original Message -----
From: "Dietmar Maurer"
To: "Alexandre Derumier" , pve-devel@pve.proxmox.com
Sent: Thursday, August 30, 2012 12:50:52
Subject: RE: [pve-devel] [PATCH 3/3] livemigrate: activate xbzrle cache
This does not work reliably for me. I (sometimes) get a never-ending migration if
I view a video inside a VM.
Would you mind if we disable that for now?
> -----Original Message-----
> From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel-boun...@pve.proxmox.com] On Behalf Of Alexandre Derumier
On 30.08.2012 11:11, Dietmar Maurer wrote:
Oh sorry. I'm travelling right now and Timo told me it looks like:
1024.000
Maybe the automatic update from bps to mbps can cause such numbers.
for 1024MB/s.
But normally, you simply get what you type on the GUI.
Sorry - I didn't see that you al
On 30.08.2012 11:11, Dietmar Maurer wrote:
Oh sorry. I'm travelling right now and Timo told me it looks like:
1024.000
Maybe the automatic update from bps to mbps can cause such numbers.
for 1024MB/s.
But normally, you simply get what you type on the GUI.
perfect!
Stefan
> Oh sorry. I'm travelling right now and Timo told me it looks like:
> 1024.000
Maybe the automatic update from bps to mbps can cause such numbers.
> for 1024MB/s.
But normally, you simply get what you type on the GUI.
On 30.08.2012 11:04, Dietmar Maurer wrote:
What if we change 'bps' to 'mbps'. The following seems to work:
Thanks! But aren't 2 decimal places enough?
I do not modify or limit the number of decimal places. So what is the problem?
Oh sorry. I'm travelling right now and Timo told me it looks like:
> >>> What if we change 'bps' to 'mbps'. The following seems to work:
>
> Thanks! But aren't 2 decimal places enough?
I do not modify or limit the number of decimal places. So what is the problem?
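The 1024.000 reading discussed here is consistent with a plain three-decimal format of the converted value; a small sketch (the exact conversion done in the GUI is an assumption):

```python
# Assumed conversion: a limit entered as 1024 MB/s, stored in bytes per
# second, converted back for display.
bps = 1024 * 1024 * 1024          # 1024 MB/s in bytes per second
mbps = bps / (1024.0 * 1024.0)    # bps -> mbps conversion
print("%.3f" % mbps)              # three decimals reproduce "1024.000"
print("%.2f" % mbps)              # two decimal places would show "1024.00"
```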
-----Original Message-----
From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel-boun...@pve.proxmox.com] On Behalf Of Dietmar Maurer
Sent: Wednesday, 29 August 2012 13:47
To: tgrodzinski; pve-devel@pve.proxmox.com
Subject: Re: [pve-devel] [PATCH] added renderer for hd strings in hardware
ove
On 29.08.2012 14:19, Timo Grodzinski wrote:
On 29.08.2012 14:01, Dietmar Maurer wrote:
@timo: I guess this is close to your initial proposal? Or do you want
limit_rd/limit_wr?
The identifiers are not so important to us; it's more important that the
values are easily readable.
So I think tha
On 30.08.2012 10:57, Dietmar Maurer wrote:
Do we also want to check for keys without comments?
if ($line =~ m/^ssh-rsa\s+(\S+)/) {
ssh allows that? If so, we should also allow that.
The man page says the comment is optional.
Stefan
> Do we also want to check for keys without comments?
> if ($line =~ m/^ssh-rsa\s+(\S+)/) {
ssh allows that? If so, we should also allow that.
On 30.08.2012 07:31, Dietmar Maurer wrote:
Ok, committed - please review and test:
https://git.proxmox.com/?p=pve-cluster.git;a=commitdiff;h=2055b0a9e41912cb02810b621608b24430c8a1fe
-----Original Message-----
From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel-boun...@pve.proxmox.com]
On 30.08.2012 07:05, Dietmar Maurer wrote:
+    my @lines = split(/\n/, $data);
+    foreach my $line (@lines) {
+        if ($line =~ m/^ssh-rsa\s+(\S+)\s+\S+$/) {
+            next if ($vhash->{$1});
+            $vhash->{$1} = 1;
+        }
+        $newdata .= $line . "\n";
     }
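To also match keys without the optional comment, the `\s+\S+$` tail of the pattern above would need to become optional; a sketch of the adjusted regex (shown in Python, key material shortened to a placeholder):

```python
import re

# Sketch: accept ssh-rsa lines with or without the optional comment field,
# deduplicating on the base64 key material (group 1) instead of the comment.
# The key strings below are shortened placeholders, not real keys.
KEY_RE = re.compile(r'^ssh-rsa\s+(\S+)(?:\s+(.*))?$')

with_comment = KEY_RE.match("ssh-rsa AAAAB3NzaC1yc2EAAAA root@node1")
without_comment = KEY_RE.match("ssh-rsa AAAAB3NzaC1yc2EAAAA")

print(with_comment.group(1))
print(without_comment.group(1))
```

Both matches yield the same key material, so deduplication keyed on group 1 works whether or not a comment is present.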
On 30.08.2012 06:52, Dietmar Maurer wrote:
Subject: [pve-devel] [PATCH] - preserve authorized_key key order - identify
double keys by key and not by comment
Signed-off-by: Stefan Priebe
---
data/PVE/Cluster.pm | 26 --
1 file changed, 12 insertions(+), 14 deletions(-)
> > id 103 -chardev socket,id=qmp,path=/var/run/qemu-
> > server/103.qmp,server,nowait -mon chardev=qmp,mode=control -
>
> Maybe we should use mux=on?
Tried that, but monitor is still limited to one connection. Is that
intentional, or a bug?
> # ps auxww|grep kvm
> root 110639 73.2 36.4 1803160 699452 ? Rl 09:42 6:22
> /usr/bin/kvm -
> id 103 -chardev socket,id=qmp,path=/var/run/qemu-
> server/103.qmp,server,nowait -mon chardev=qmp,mode=control -
Maybe we should use mux=on?
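For reference, the mux=on variant would look roughly like this; note that mux multiplexes several monitor/serial frontends onto one chardev rather than accepting several clients on one socket, which may explain why it does not lift the one-connection limit:

```
-chardev socket,id=qmp,path=/var/run/qemu-server/103.qmp,server,nowait,mux=on \
-mon chardev=qmp,mode=control
```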
> so you can reproduce the bug ?
I saw several different problems. For example, after live migration I have VM
103 running, and the qmp socket seems to block:
# ps auxww|grep kvm
root 110639 73.2 36.4 1803160 699452 ? Rl 09:42 6:22 /usr/bin/kvm
-id 103 -chardev socket,id=qmp,path=/v
> so you can reproduce the bug ?
>
> One idea: with my patch, the vm config file is on the source during migration, and
> stats are done on the disks with qmp during live migration.
>
> Maybe this can cause some queuing in the multiplexing part ?
Maybe, but my recent patches should avoid that problem?
I disabl
so you can reproduce the bug ?
One idea: with my patch, the vm config file is on the source during migration, and
stats are done on the disks with qmp during live migration.
Maybe this can cause some queuing in the multiplexing part ?
----- Original Message -----
From: "Dietmar Maurer"
To: "Alexandre DERUMIER"
> do you think it is because of my live migration patch ?
The whole thing is unstable now.
Added another patch to tolerate up to 5 query-migrate failures:
https://git.proxmox.com/?p=qemu-server.git;a=commitdiff;h=b0b756c14d58c3d84af41c7cd967cedddb32fd44
but it is still not reliable - got arbitrary