Hi Dietmar,
sorry for the late reply.
On 17.12.2012 07:49, Dietmar Maurer wrote:
But shared disks are really useful; I regularly use them for web clusters or
database clusters.
Can you provide some details? What database or what web server is able to
use such a shared disk?
Webserver: all webservers (Apache, nginx, ...) under Linux with a shared
filesystem (OCFS2, GFS, ...).
Seems we are talking about different things!
Stefan wants to pass a file on GFS into several VMs.
Hi Dietmar,
On 17.12.2012 10:48, Dietmar Maurer wrote:
Seems we are talking about different things!
Stefan wants to pass a file on GFS into several VMs.
IMHO, this is a safe way to destroy all data?
No, I think we are talking about the same thing.
Sharing a disk between VMs, but of course the VMs need a cluster filesystem
or an app (like SQL Server, for example).
What do I want? I want to share a disk between
Hi Dietmar,
On 17.12.2012 11:01, Dietmar Maurer wrote:
Please can you describe exactly what you want to do? From what I see you
want to run GFS on the host and pass a file on GFS into the VM?
No.
Or do you run GFS inside the guest?
Yes! I'm using ocfs2 but that doesn't matter. The host isn't touched by this.
I'm using a cluster fs INSIDE guests.
So X guests can share the same disk and data.
That makes more sense now ;-)
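A minimal sketch of what this setup looks like at the QEMU level (the device
path, memory size, and storage names here are purely illustrative; in PVE the
-drive line is generated from the VM config, not typed by hand):

```shell
# Hypothetical illustration: two guests attach the SAME logical volume.
# The guests then run a cluster filesystem (e.g. OCFS2) on the shared device;
# the host itself never mounts it.
qemu-system-x86_64 -m 1024 \
    -drive file=/dev/vg0/shared-disk,if=virtio,format=raw,cache=none ...
qemu-system-x86_64 -m 1024 \
    -drive file=/dev/vg0/shared-disk,if=virtio,format=raw,cache=none ...
```

Without a cluster filesystem (or a cluster-aware application) inside the
guests, this kind of concurrent attachment would indeed corrupt the data.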
Hi,
On 17.12.2012 11:06, Dietmar Maurer wrote:
Sorry for
store1:/0/vm-0-disk-1.raw
(owner is VM 0). But I am not sure if that is a good idea.
The idea is to have entries like this one:
shared_scsi1:vm-117-disk-5
shared_virtio2:vm-117-disk-9
We don't need the path as the PVE code always relies on the vm-(\d+)
number. So my idea was to do
Just committed the ballooning stats patches.
Ok, thanks.
Also added a fix so that we can set the polling interval at VM startup.
Great!
Any news on getting all stats values in one qom get?
Just uploaded a patch for that.
___
pve-devel mailing list
Hi Dietmar,
On 17.12.2012 11:20, Dietmar Maurer wrote:
(owner is VM 0). But I am not sure if that is a good idea.
I hadn't thought about it.
I think the master also needs to know where the disk is shared.
Because if we do a snapshot rollback, for example, on the master, we need
to stop all VMs where the disk is shared...
So do we need
So your idea is to prefix the controller (scsi, ide, virtio) on ALL guests.
No.
And the owner is just detected by the ID? (vm-$ID-disk-$I)
We already have an 'owner' for each volume (that is already implemented).
If (owner == 0) === shared disk
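To make that detection rule concrete, here is a small sketch in shell. This is
purely illustrative and not the actual PVE code (which is Perl and, as noted
above, already implements the 'owner' lookup); the volume names are examples:

```shell
#!/usr/bin/env bash
# Illustrative sketch: derive the owning VM id from a volume name of the
# form vm-<ID>-disk-<N>, and treat owner 0 as "shared disk" per the proposal.
owner_of() {
    local vol=$1
    if [[ $vol =~ ^vm-([0-9]+)-disk-[0-9]+$ ]]; then
        echo "${BASH_REMATCH[1]}"
    else
        echo "unknown"
    fi
}

for vol in vm-117-disk-5 vm-0-disk-1; do
    owner=$(owner_of "$vol")
    if [[ $owner == 0 ]]; then
        echo "$vol: shared (owner == 0)"
    else
        echo "$vol: owned by VM $owner"
    fi
done
```

So vm-117-disk-5 resolves to owner 117, while vm-0-disk-1 would be treated
as a shared disk under the owner == 0 convention.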
I like examples to be sure we're talking about the same thing. So you mean
like this:
VM 123
scsi1: ...,vm-123-disk5,...
owner = 1
no, owner = 123
VM 124
shared_scsi6: ...,vm-123-disk5,...
owner = 0
owner = 123
VM 125
shared_scsi7: ...,vm-123-disk5,...
owner = 0
owner =
We already have an 'owner' for each volume (that is already implemented).
Ah OK, sorry, I didn't know that. How is that detected?
The 'owner' is not a
Hi,
On 17.12.2012 11:45, Dietmar Maurer wrote:
OK, another question. Do we pass all params like cache, I/O limits, ... to
shared guests? Or should this be configurable in shared guests too? I
would like to keep it as simple as possible and would pass these settings
from the master guest to the shared guests.
With my suggestion, there is
But I guess we should force cache=none for shared disks anyway?
Not sure about it, but I use directsync. (I'll retest it, but I think that
cache=none (writeback in the guest) doesn't allow ocfs2 to start.)
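For reference, a hedged sketch of how a cache mode is pinned on a disk entry
with the existing qm CLI (the VM id, storage, and volume name are placeholders;
whether shared guests should inherit such settings from the owning guest is
exactly the open question above):

```shell
# Hypothetical example, not a recommendation: pin directsync on a disk entry.
qm set 117 -scsi1 store1:vm-117-disk-5,cache=directsync
```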
applied, thanks!
- Dietmar
-----Original Message-----
From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel-
boun...@pve.proxmox.com] On Behalf Of Alexandre Derumier
Sent: Thursday, 13 December 2012 15:41
To: pve-devel@pve.proxmox.com
Subject: [pve-devel] pve-manager : add hd resize
Hi,
On 17.12.2012 12:04, Alexandre DERUMIER wrote:
Mhm, I would say the cache mode doesn't matter.
Hi all,
Storage migration to an LVM volume means, for all I know, using dd
if=current_image of=new_image bs=1M. For two reasons I wish there were
some other way of doing it:
1) It copies the entire block device bit by bit, even if the bits are zero.
2) It is painfully slow due to 1).
Any other, faster, way of
Maybe you can try qemu-img convert (this is what I use in my test code; you
can also use qcow2 or any storage as input):
qemu-img convert -f raw -O host_device myfile.raw /dev/mylvmdevice
1) It copies the entire block device bit by bit, even if the bits are zero.
2) It is painfully slow due to 1).
But 1) is needed, because LVM does not initialize new volumes with zeros.
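To make points 1) and 2) concrete, here is a small demonstration with
throwaway file names (a sketch using GNU coreutils; it shows that a plain dd
writes every zero byte while a sparse-aware copy skips them — though, as noted,
a fresh LVM volume needs the zeros written anyway, since old data may remain):

```shell
# Sketch: compare a full copy vs. a sparse-aware copy of a 64 MiB empty file.
cd "$(mktemp -d)"
truncate -s 64M src.img                        # sparse file, no data blocks
dd if=src.img of=full.img bs=1M status=none    # dd writes all 64 MiB
cp --sparse=always src.img sparse.img          # runs of zeros are skipped
du -k full.img sparse.img                      # allocated size in KiB
```

The du output makes the difference visible: full.img has all its blocks
allocated, sparse.img almost none, which is why dd to a device is slow even
when the source is mostly empty.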
Hi all,
I have designed the conceptual solution for off-line storage migration,
as can be read below (also attached for better readability). Every use
case has been tested from the command line and found working. Do you
have any comments, or have I left something out?
Storage migration
Two