do you think?
- Original Message -
From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Thursday, 6 December 2012 13:47:43
Subject: Re: [pve-devel] qemu shutdown timeout
I thought the same but $timeout is 30 when
?
- Original Message -
From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com
Cc: Dietmar Maurer diet...@proxmox.com, pve-devel@pve.proxmox.com
Sent: Tuesday, 4 December 2012 16:26:24
Subject: Re: [pve-devel] ceph news: syncfs kernel support
Hi Alexandre,
On 03.12.2012 08:53, Alexandre DERUMIER wrote:
I think you should not use the term 'lock/unlock' here.
Maybe something like:
volume_protect(true/false)
volume_protect seems good (as snapshots are not really read-only, and rbd already
uses 'protect' for the rbd command)
I'll
Great!
I've seen that you've removed the save-vm live patch? Is it already
upstream? Or is it now included in internal-snapshot-async.patch?
Greets,
Stefan
On 23.11.2012 07:52, Dietmar Maurer wrote:
Hi all,
I just updated the git repository for qemu 1.3 rc1:
Hello,
I sent a new patch using ssize_t. (Subject: [PATCH] overflow of int ret:
use ssize_t for ret)
Stefan
On 22.11.2012 09:40, Peter Maydell wrote:
On 22 November 2012 08:23, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
On 21.11.2012 23:32, Peter Maydell wrote:
Looking
Hello list,
I was trying to make consistent rbd snapshots. What is the recommended
method in PVE / qemu to flush all buffers / be able to make a consistent
backup?
--
Kind regards
Stefan Priebe
Bachelor of Science in Computer Science (BSCS)
Board member (CTO)
Yep - same for me. That is what I was talking about.
Stefan
On 22.11.2012 14:05, Dietmar Maurer wrote:
Here is a test case:
https://lists.gnu.org/archive/html/qemu-devel/2012-11/msg02358.html
can someone reproduce?
-Original Message-
From: Dietmar Maurer
Sent: Thursday, 22.
.
(I have sent some preliminary patches some months ago; they need to be polished)
Patches? This sounds experimental - patches for the guest kernel? For qemu?
Greets,
Stefan
- Original Message -
From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: pve-devel@pve.proxmox.com
Sent: Thursday, 22 November
On 22.11.2012 15:12, Alexandre DERUMIER wrote:
On 22.11.2012 14:41, Alexandre DERUMIER wrote:
I see 2 ways:
- snapshot with vmstate (so pending writes in memory are saved).
Does this work with rbd snap?
Yes. (Dietmar has implemented it by creating an rbd volume to store the
On 21.11.2012 08:58, Dietmar Maurer wrote:
For qemu.git we need some more patches I've already done:
1.) the kvm_user_version parsing does not work for qemu.git as they
have a ',' in their version output.
2.) we need to change the start commands for kvm and add:
-enable-kvm -qmp stdio
On 21.11.2012 09:40, Dietmar Maurer wrote:
On 21.11.2012 09:15, Dietmar Maurer wrote:
+push @$cmd, '-qmp', 'stdio';
+
What is this?
http://wiki.qemu.org/QMP
Without this the monitor does not work at all, and -monitor stdio is only a
read-only monitor where you can't change settings.
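For illustration, the QMP handshake that -qmp enables looks like this on the wire: the server sends a greeting, the client must answer with qmp_capabilities before any other command. A minimal Python sketch of just the JSON messages (the socket setup around it is assumed, not PVE's actual code):

```python
import json

def capabilities_cmd():
    # QMP requires negotiating capabilities before any other command.
    return json.dumps({"execute": "qmp_capabilities"})

def command(name, **arguments):
    # A generic QMP command, e.g. command("query-status").
    msg = {"execute": name}
    if arguments:
        msg["arguments"] = arguments
    return json.dumps(msg)

# After connecting to a socket started with e.g.
#   kvm -qmp unix:/tmp/qmp.sock,server,nowait ...
# the server first sends a greeting like {"QMP": {...}}; the client
# answers with the capabilities command, then normal commands:
print(capabilities_cmd())
print(command("query-status"))
```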
Not sure about off_t. What is min and max size?
Stefan
On 21.11.2012 at 18:03, Stefan Weil s...@weilnetz.de wrote:
On 20.11.2012 13:44, Stefan Priebe wrote:
rbd / rados quite often returns lengths of writes
or discarded blocks. These values might be bigger than an int.
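To illustrate why the patch switches ret to ssize_t: a length of a few GiB no longer fits in a 32-bit signed int. A small Python sketch using ctypes to mimic the C types (purely illustrative, not the qemu patch itself):

```python
import ctypes

def as_c_int(length):
    """What storing this length in a C 'int ret' would yield:
    the value is truncated to 32 bits."""
    return ctypes.c_int(length).value

def as_c_ssize_t(length):
    """The same value stored in a ssize_t (64-bit on common platforms)."""
    return ctypes.c_ssize_t(length).value

big = 3 * 1024 ** 3  # a 3 GiB write/discard length, plausible for rbd

print(as_c_int(big))      # negative: the sign bit is set after truncation
print(as_c_ssize_t(big))  # value preserved
```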
On 21.11.2012 07:41, Stefan Hajnoczi wrote:
On Tue, Nov 20, 2012 at 8:16 PM, Stefan Priebe s.pri...@profihost.ag wrote:
Hi Stefan,
On 20.11.2012 17:29, Stefan Hajnoczi wrote:
On Tue, Nov 20, 2012 at 01:44:55PM +0100, Stefan Priebe wrote:
rbd / rados tends to return pretty often length
You find them in the Proxmox PVE repo.
greets
On 19.11.2012 16:59, Michael Rasmussen wrote:
On Mon, 19 Nov 2012 16:55:42 +0100
Michael Rasmussen m...@datanom.net wrote:
Hi,
A Debian Squeeze with main, contrib and non-free does not provide all
packages needed to build:
E: Unable to locate
Yes, that's right. We use ocfs2 and right now we need a third VM exporting via
iSCSI.
Stefan
On 19.11.2012 at 08:24, Alexandre DERUMIER aderum...@odiso.com wrote:
What about just renaming disks if you want to move a disk from vm A to vm
B? Not sure if all storage backends support this, but at
What about linking disks? You create a disk vm-117-1, then you can link this
disk to vm-112, and in the config you find something like virtio0: vm-112-3|vm-117-1
Stefan
On 19.11.2012 at 08:35, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
Yes that's right. We use ocfs2 and right now
On 18.11.2012 at 07:31, Dietmar Maurer diet...@proxmox.com wrote:
# ./x86_64-softmmu/qemu-system-x86_64 -monitor stdio -enable-kvm -hda image.raw ...
you meant -qmp, not -monitor?
no
Hmm, with -monitor the qm monitor shell stuff doesn't work; I always get the
failure that qmp human
On 14.11.2012 08:22, Dietmar Maurer wrote:
On 14.11.2012 08:09, Dietmar Maurer wrote:
Could you please test if that happens to you too? I would like to
verify that it's not my own fault before reporting.
Sure, if you provide me with the exact steps to reproduce it on latest qemu git
On 13.11.2012 14:04, Dietmar Maurer wrote:
This is a preview of the planned backup feature.
The following patches are against latest qemu.git
I plan to send the patch to the qemu-devel list next week.
Before I do that, I would like to get some feedback about:
* functionality
*
Could you please test if that happens to you too? I would like to verify that
it's not my own fault before reporting.
Stefan
On 14.11.2012 at 07:28, Dietmar Maurer diet...@proxmox.com wrote:
Generated kvm command looks correct:
iops_rd=70,iops_wr=70,bps_rd=115343360,bps_wr=83886080
Any
On 14.11.2012 08:09, Dietmar Maurer wrote:
Could you please test if that happens to you too? I would like to verify that
it's not my own fault before reporting.
Sure, if you provide me with the exact steps to reproduce it on latest qemu git code.
Why not first check if it happens with
On 09.11.2012 17:02, Alexandre DERUMIER wrote:
Oh great,
so it seems that you can get more IOs with more VMs. (It's becoming interesting
for me :)
What are your exact Xeon models? Same server model, BIOS options?
On KVM Host or for Ceph?
what happens on the dual Xeon, if you shut down some
On 12.11.2012 08:51, Alexandre DERUMIER wrote:
Right now RBD in KVM is limited by CPU speed.
Good to know, so it seems a lack of threading, or maybe some locks (so a faster
CPU gives more IOPS).
If you launch parallel fio on the same host in different guests, do you get more
total IOPS? (for me
Hello,
rbd disk deletion takes pretty long. I've even seen a disk which took 1
hour to delete (testing new striping feature).
What would be the correct way for PVE to not fail on those long running
tasks?
Greets,
Stefan
___
pve-devel mailing list
On 12.11.2012 13:46, Dietmar Maurer wrote:
We need to start a task for such actions.
Any hint how to do that?
-Original Message-
From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel-boun...@pve.proxmox.com] On Behalf Of Stefan Priebe - Profihost AG
Sent: Monday, 12 November
.
Stefan
- Original Message -
From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com
Cc: eric e...@netwalk.com, pve-devel@pve.proxmox.com
Sent: Monday, 12 November 2012 12:58:35
Subject: Re: [pve-devel] less cores more iops / speed
On 12.11.2012 08:51
january ;-(
Greets,
Stefan
- Original Message -
From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Monday, 12 November 2012 15:40:12
Subject: Re: [pve-devel] less cores more iops / speed
On 12.11.2012 15:26
On 09.11.2012 11:31, Alexandre DERUMIER wrote:
Yes. CPU load in Ceph for random 4k writes can be dropped by 50% when
disabling all debug settings (see my last post on the ceph mailing list).
I just saw it, great, I'll test it on my side :)
(how many SSDs - nodes?) I hope that Inktank is working on
is already running)
# ntpdate 0.debian.pool.ntp.org
8 Nov 08:53:16 ntpdate[731962]: the NTP socket is in use, exiting
- Original Message -
From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: pve-devel@pve.proxmox.com
Sent: Thursday, 8 November 2012 08:38:49
Subject: [pve-devel] clocks
On 08.11.2012 09:53, Alexandre DERUMIER wrote:
Does it happen when you have a high load average?
No, machines are idle, but I can't understand why the clock changes are so
extreme...
Stefan
- Original Message -
From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: Alexandre
Hello list,
Inktank recommends using writeback mode for rbd by default, as it is
as safe as a normal hard disk with its cache.
What do you think?
Greets
Stefan
Hello list,
right now I'm testing rbd block devices with kvm using the default cache (no
cache).
Is there any recommended value for
On 02.11.2012 09:24, Dietmar Maurer wrote:
I want to enable trim support rbd see:
http://ceph.com/docs/master/rbd/qemu-rbd/#enabling-discard-trim
I think we can generally enable it; it doesn't harm.
Where can we do this? (which function)
That seems to be a highly experimental feature - only
Hello list,
right now with the latest patches, a tuned cipher list and openssl
AES-NI support I'm able to migrate whole VMs with 100% memory usage
at 75MB/s instead of 41MB/s.
But this is still limited to the max speed I get with scp over 10GbE, CPU
bound. As I use a separate LAN for
On 31.10.2012 07:28, Dietmar Maurer wrote:
Subject: Re: [pve-devel] RRD graphics in webfrontend broken
Sorry, the last patch still contains a bug. The code can rename a .png.tmp file of
another process which has already started to write to that file.
So we need file locking too.
I finally
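One way to fix this race without extra locking: each writer uses a private, uniquely named temp file in the same directory and relies on rename() being atomic, so no other process can ever rename a half-written file. A Python sketch of the pattern (illustrative, not the actual pve code):

```python
import os
import tempfile

def write_png_atomically(path, data):
    """Write to a private temp file in the target directory, then rename.
    The temp name is unique per writer (mkstemp), so another process can
    never grab our half-written file; rename() itself is atomic on POSIX
    when source and destination are on the same filesystem."""
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname, suffix=".png.tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.rename(tmp, path)  # readers see either the old or the new file
    except BaseException:
        os.unlink(tmp)  # clean up the private temp file on failure
        raise
```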
I think there are several reasons - high disk I/O, slow network speed
or whatever. You don't know how old the hardware is where people run
Proxmox ;-)
I think even a value of 300s could be fine.
Stefan
On 31.10.2012 06:18, Alexandre DERUMIER wrote:
mmm, maybe a 30s timeout is not big enough
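Whatever value is chosen, the shape of the wait is the same: poll until the guest is down or the timeout expires, then escalate. A minimal Python sketch (the condition callback and the numbers are assumptions, not PVE's implementation):

```python
import time

def wait_for(condition, timeout=30.0, interval=0.5):
    """Poll `condition` until it returns True or `timeout` seconds pass.
    Returns True on success, False on timeout - the caller can then
    escalate, e.g. from a graceful guest shutdown to a hard stop."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Hypothetical usage: wait up to 300s for the VM to power off,
# then force-stop it.
# if not wait_for(vm_is_stopped, timeout=300):
#     hard_stop_vm()
```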
On 31.10.2012 05:59, Dietmar Maurer wrote:
You mean for the VNC stuff - OpenVZ and Qemu? Yes, we can, as it is already TLS,
but otherwise the overhead is low for the VNC stuff.
Yes, I guess it is not worth optimizing that now.
The bad thing is that we can't just fire one command. We have to create
On 31.10.2012 09:47, Dietmar Maurer wrote:
Many thanks for that patch. But I am still unsure if we should go that way.
I just found another option - /root/.ssh/config
# man ssh_config
What if we simply create that file if it does not exist?
- create /.ssh/config with reasonable value for
On 31.10.2012 14:26, Dietmar Maurer wrote:
applied, thanks.
The following is needed to automatically create the config on package update:
https://git.proxmox.com/?p=pve-cluster.git;a=commitdiff;h=c1eb6bdfa127ff548e7e83141336019c7e06f38f
Thanks - please fix the blowfish cipher - it's
Hi,
is it planned to have a central cipher configuration for all ssh tasks?
Or maybe even use netcat for migration stuff instead of ssh on trusted
networks?
I have a patched openssl version running which supports AES-NI, so I would
like to use aes128-cbc instead of blowfish, but right now I've
Or why specify a cipher at all? Normally each system uses its
cipher list from /etc/ssh, so the user can, if he wants, specify another
cipher order and the systems will use this list automatically.
Stefan
On 30.10.2012 09:43, Stefan Priebe - Profihost AG wrote:
Hi,
is it planned to have
On 30.10.2012 11:40, Dietmar Maurer wrote:
*argh* right. Hmm, I think cfs_read_file should be a method in pve-
common.
Wouldn't that make sense?
no, that makes absolutely no sense.
Hint: cfs_read_file reads data from the cluster file system.
Yes I know - I'm still thinking about that
On 30.10.2012 12:11, Dietmar Maurer wrote:
Then we should just move AbstractMigrate.pm from pve-common to pve-
cluster? Do you see a problem here?
Yes, that introduces problems when you update those packages, because the file
is now inside 2 different packages (can cause nasty conflicts). Not
On 30.10.2012 12:25, Dietmar Maurer wrote:
so pve-common gets updated first, removes the file, and then pve-cluster
gets installed after pve-common has been updated.
Yes, that is the theory. In practice, updates can fail at various stages, and
users do strange things.
Maybe we can simply
On 30.10.2012 13:52, Dietmar Maurer wrote:
Variant B:
$ssh_opts = ['-c', 'blowfish'];
sub migrate {
my ($class, $node, $nodeip, $vmid, $opts, $ssh_opts) = @_;
That is OK for me.
OK thanks. How can we provide default values for datastore.cfg cipher
setting? Do we have to modify via
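The quoted Variant B could look roughly like this as a Python analog of the Perl migrate signature: the caller passes the cipher options, and the default stays with the system cipher list from /etc/ssh (paths, option names and the helper itself are assumptions for illustration):

```python
def ssh_cmd(nodeip, remote_cmd, ssh_opts=None):
    """Build an ssh argument vector for a migration-style call.
    `ssh_opts` lets the caller pick a cipher, e.g. ['-c', 'blowfish']
    or ['-c', 'aes128-cbc'] on hosts with AES-NI. With no ssh_opts,
    the system cipher list from /etc/ssh applies."""
    cmd = ["/usr/bin/ssh", "-o", "BatchMode=yes"]
    cmd += ssh_opts or []
    cmd += [nodeip] + remote_cmd
    return cmd

print(ssh_cmd("192.168.0.2", ["qm", "list"], ssh_opts=["-c", "blowfish"]))
```

The point of the design is that the cipher choice lives in one place (the caller's config), not hard-coded at every ssh call site.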
Hello,
is it known that the rrd graphics are sometimes broken?
I've seen the following symptoms:
- no graphic shown
- half graphic shown, rest was incorrect pixels
- wrong graphic shown (I've seen the load graph in CPU, or even the network
traffic graph in server load)
Could you point me to the perl
On 29.10.2012 06:07, Dietmar Maurer wrote:
I'm not sure if this is a bug or not, but the memory usage info in Proxmox PVE
doesn't seem to be correct.
Memory usage on the host and inside the guest is something completely different.
I expected that, but I was surprised by the big difference. When
On 29.10.2012 12:04, Alexandre DERUMIER wrote:
Or is this the expected idea:
mv /etc/pve/nodes/NODENAME/qemu-server/113.conf
/etc/pve/nodes/NEWNODENAME/qemu-server/
yes, this is the correct way.
if you restart the downed host, it will get the new VM locations from the cluster
config.
mhm so i
please read my 2nd mail. This is sadly not enough, as perl thinks the
circular reference still exists when going out of scope.
We have to mark the self object in mux as weakened.
On 26.10.2012 15:24, Dietmar Maurer wrote:
To fix this we need a DESTROY function in QMPClient:
sub DESTROY {
my
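The Perl fix being discussed is Scalar::Util::weaken on the mux's back-reference; the same idea expressed in Python (an illustrative analog only, not the QMPClient code) uses weakref, so the client<->mux cycle no longer keeps the client alive:

```python
import weakref

class Mux:
    def __init__(self, owner):
        # Keep only a weak reference back to the owner, so the
        # owner <-> mux cycle does not keep the owner alive.
        self._owner = weakref.ref(owner)

    def owner(self):
        return self._owner()  # None once the owner is gone

class QMPClient:  # hypothetical stand-in for the Perl QMPClient
    def __init__(self):
        self.mux = Mux(self)

client = QMPClient()
probe = weakref.ref(client)
del client            # no strong cycle left, so the object is freed
assert probe() is None
```

Without the weak reference, the strong cycle would keep both objects alive until a garbage-collection pass; with it, destruction happens as soon as the last outside reference goes away, which is exactly what the Perl DESTROY/weaken fix is after.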
Hi,
while testing xbzrle I've seen it several times that a migration fails
- no problem so far, but why is the VM stopped in that case instead of
just keeping it running on the source node?
Output:
Oct 25 14:44:17 migration xbzrle cachesize: 4294967296 transferred 0
pages 0 cachemiss 1941340
On 29.09.2012 at 08:26, Dietmar Maurer diet...@proxmox.com wrote:
I thought of something like:
ssh cloud1-1200.de-nserver.de zfs send JBOD01Pool/vm-105-disk-1@testsnap | zfs recv tank/abc
or
ssh cloud1-1200.de-nserver.de zfs send JBOD01Pool/vm-105-disk-1@spriebetest | gzip
On 27.09.2012 06:43, Dietmar Maurer wrote:
I also tried to do backups with snapshots but this doesn't seem to work:
INFO: starting new backup job: vzdump 100 --remove 0 --mode snapshot --compress lzo --storage backuplocal --node serv121
INFO: Starting Backup of VM 100 (qemu)
INFO: status =
On 27.09.2012 09:26, Dietmar Maurer wrote:
The idea is that we do not backup any snapshot data. The vzdump would
only include the data from the running instance. I guess that is OK?
But isn't the correct way to make a snapshot and then compress the
snapshot? How can vzdump verify that the
Hello list,
while under Windows the console works fine with the original Oracle Java, I
can't get it to work with
OpenJDK and the IcedTea plugin.
This page works fine with Chromium:
http://www.java.com/de/download/testjava.jsp
But when I try to access the Proxmox console - I get the jimmy is dead
Priebe - Profihost AG
s.pri...@profihost.ag
Cc: pve-devel@pve.proxmox.com
Sent: Thursday, 13 September 2012 06:35:22
Subject: RE: [pve-devel] API to get mac and vm id
I would like to create a small tool which will act as a dhcp addon to
proxmox and simulate a dhcp server with perl.
Great! I would like
Hello list,
I would like to create a small tool which will act as a dhcp addon to proxmox
and simulate a dhcp server with perl.
For this I need a way to get all vm ids and their NICs through the proxmox
api. Is there a good starting example how to use the API for this?
Is there a way at all?
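One building block such a tool needs is parsing the netX values from the VM configs, which look like 'virtio=DE:AD:BE:EF:12:34,bridge=vmbr0' (model=MAC followed by comma-separated options). A Python sketch; the helper name is made up for illustration:

```python
import re

# A MAC address: six hex pairs separated by colons.
MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}(?::[0-9A-Fa-f]{2}){5})$")

def parse_net(value):
    """Parse a qemu-server netX value like
    'virtio=DE:AD:BE:EF:12:34,bridge=vmbr0' into a dict holding the
    NIC model, its MAC and any extra options (bridge, tag, ...)."""
    nic = {}
    for part in value.split(","):
        key, _, val = part.partition("=")
        if MAC_RE.match(val):
            # The pair whose value is a MAC is model=macaddr.
            nic["model"], nic["macaddr"] = key, val.upper()
        else:
            nic[key] = val
    return nic

print(parse_net("virtio=DE:AD:BE:EF:12:34,bridge=vmbr0"))
```

A dhcp addon would iterate over all VM configs fetched via the API, run each netX value through this, and hand the MAC-to-VM mapping to its lease logic.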