Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 4ccf10a..054394b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6144,7 +6144,8 @@ sub
This allows importing an external disk into a VM.
qmimport
Signed-off-by: Alexandre Derumier
---
Makefile | 1 +
PVE/CLI/Makefile | 2 +-
PVE/CLI/qmimport.pm | 140
qmimport | 8 +++
4
Hi,
This is my second attempt at a command to import an external disk with a single
command line.
This time, I have a separate qmimport, CLI-only and root-only.
If it's really not possible to include such a feature in Proxmox,
could you review patches 1 & 2 to include them upstream?
They are
This allows using a path as the src image directly, without a storeid.
We don't try to detect the src format ourselves (VMware can export a raw file
with a vmdk extension, for example)
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 67
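The point about not trusting the source extension can be illustrated with a
qemu-img invocation. A minimal sketch with made-up paths; the command is only
printed (dry run), and -f forces the input format instead of letting qemu-img
guess it from the extension:

```shell
# Hypothetical paths for illustration only.
SRC=/mnt/usb/exported-disk.vmdk
DST=/var/lib/vz/images/100/vm-100-disk-1.raw

# -f forces the input format (here: the file is actually raw despite its
# .vmdk extension), -O selects the output format. Printed, not executed.
echo "qemu-img convert -f raw -O raw $SRC $DST"
```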
It seems that the current implementation is much better than it was in the
RHEL-based kernel.
On Thu, Jan 19, 2017 at 9:43 AM, Alexandre DERUMIER
wrote:
> Hi,
>
> I have had THP re-enabled (transparent_hugepage=madvise) for around a year
> (with pve-kernel 4.2-4.4), and I
I think it is good practice to specify the default value:
> +my %scsiblock_fmt = (
> +    scsiblock => {
> +        type => 'boolean',
> +        description => "whether to use scsi-block for full passthrough of host block
> device\n\nWARNING: can lead to I/O errors in combination with low memory or
>>well... you can't have your cake and eat it too? or in other words,
>>nothing in life is free..
Well, to give a real example, I have a customer who migrated around 50
VMware VMs with exactly your steps,
and he told me that it is really slow/tedious with all these steps.
Just to
use scsi-hd by default instead,
since scsi-block can cause I/O errors and data corruption in low
memory or highly fragmented memory situations since Qemu 2.7
Signed-off-by: Fabian Grünbichler
---
note: at least until this is fixed in Qemu
PVE/QemuServer.pm | 12 +++-
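For context, the two alternatives differ only in which QEMU device backs the
passed-through drive. A sketch of the two command-line forms, printed rather
than executed; the drive id and device path are assumptions, while scsi-block
and scsi-hd are the actual QEMU device names:

```shell
DISK=/dev/sdb   # assumed host block device

# scsi-block: full SCSI command passthrough (SG_IO); the variant that can
# fail under memory pressure / fragmentation since QEMU 2.7
echo "-drive file=$DISK,if=none,id=drive-scsi0 -device scsi-block,drive=drive-scsi0"

# scsi-hd: emulated SCSI disk on top of the same block device (the new default)
echo "-drive file=$DISK,if=none,id=drive-scsi0 -device scsi-hd,drive=drive-scsi0"
```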
Signed-off-by: Dominik Csapak
---
pvecm.adoc | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/pvecm.adoc b/pvecm.adoc
index baf9300..491b2ac 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -864,6 +864,14 @@ migrations. This can be done via the configuration file
this adds the distinction between online/offline migration for VMs,
explains what happens during an online migration, and explains
what the requirements are
Signed-off-by: Dominik Csapak
---
I added the requirements, so that we can refer to the documentation
when you
s/unless/until/
added a comma to split the sentence
Signed-off-by: Dominik Csapak
---
ha-manager.adoc | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/ha-manager.adoc b/ha-manager.adoc
index b1ca0c7..47fbe7c 100644
--- a/ha-manager.adoc
+++
Do not allocate the HA Environment every time we fork a new CRM or
LRM, but once at the start of the Simulator for all nodes.
This can be done as the Env does not save any state and thus can be
reused; we use this also in the TestHardware class.
Making the behavior of both Hardware classes more
changes since v1:
* dropped patch 2/7 "sim: use HA envs after_fork to close all inherited FDs"
  as it was too hacky and could cause problems.
* implement the semantics from the dropped patch in an easier way in the
"factor out and unify sim_hardware_cmd" patch, simply pass the lock
filehandle
Add a delete button to each service entry row. This allows deleting a
service at runtime.
Signed-off-by: Thomas Lamprecht
---
no changes
src/PVE/HA/Sim/RTHardware.pm | 52
1 file changed, 52 insertions(+)
diff --git
Will be used to allow adding services to the simulator at runtime in
a future patch.
Signed-off-by: Thomas Lamprecht
---
no changes
src/PVE/HA/Sim/RTHardware.pm | 92
1 file changed, 51 insertions(+), 41 deletions(-)
diff
Most things done by sim_hardware_cmd are already abstracted and
available in both the TestHardware and the RTHardware class.
Abstract out the CRM and LRM control to allow the unification of both
classes' sim_hardware_cmd.
As in the last year mostly the regression test system's TestHardware
class
On 01/18/2017 05:56 PM, Dietmar Maurer wrote:
Can we simply use:
gettext('Enable Firewall') => gettext('Enable')
gettext('Enable DHCP') => 'DHCP'
gettext('Enable XXYYYZZZ') => 'XXYYYZZZ'
unless there is a real reason to use 'blah blah XXYYYZZZ blah' ...
We could even display the actual
On Thu, Jan 19, 2017 at 10:24:17AM +0100, Alexandre DERUMIER wrote:
> The problem with 1),
>
> it's that users don't always have local file storage space.
> (for example, they have only a small local disk and the disks are on a SAN, or they
> have a big lvmthin storage but no space for files)
>
> My
The problem with 1),
it's that users don't always have local file storage space.
(for example, they have only a small local disk and the disks are on a SAN, or they
have a big lvmthin storage but no space for files)
My students/customers generally export the image to a USB drive or network share,
and don't
On Thu, Jan 19, 2017 at 07:17:29AM +0100, Alexandre Derumier wrote:
> We had discussed last year a method to import external images
> (vmware, xen, hyperv) into a Proxmox VM
> http://pve.proxmox.com/pipermail/pve-devel/2016-January/018757.html
>
> This is mostly the feature that all my training
On 01/19/2017 07:46 AM, Alexandre DERUMIER wrote:
> Hi,
>
> as a first step, I have send patch to import external disk image.
>
> now for ova import,
> I think we need a method to extract the files from the ova (it's a simple tar file),
>
> then an XML parser to parse the OVF descriptor
>
> Does somebody
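Since an OVA is just a tar archive, the extraction step described above can be
sketched with standard tools. Everything below uses made-up filenames and
first builds a dummy OVA so the commands are self-contained:

```shell
set -e
WORK=$(mktemp -d)
cd "$WORK"

# Build a dummy OVA: an OVF descriptor (XML) plus a referenced disk image.
printf '<?xml version="1.0"?>\n<Envelope></Envelope>\n' > appliance.ovf
printf 'fake-disk-data' > disk1.vmdk
tar -cf appliance.ova appliance.ovf disk1.vmdk

# Extraction: plain tar, then locate the OVF descriptor to hand to a parser.
mkdir extracted
tar -xf appliance.ova -C extracted
ls extracted | grep '\.ovf$'
```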
Hi,
I have had THP re-enabled (transparent_hugepage=madvise) for around a year (with
pve-kernel 4.2-4.4), and I don't have problems anymore like in the past.
I'm hosting a lot of databases (mysql, sqlserver, redis, mongo, ...) and I haven't
seen a performance impact since re-enabling THP.
So I
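For reference, transparent_hugepage=madvise means the kernel only backs memory
with huge pages where an application has opted in via madvise(MADV_HUGEPAGE);
the active mode appears bracketed in sysfs. A sketch parsing that format,
using a literal sample string instead of reading sysfs so it runs anywhere:

```shell
# On a real system: cat /sys/kernel/mm/transparent_hugepage/enabled
SAMPLE="always [madvise] never"   # assumed sample of the sysfs content

# The bracketed entry is the active mode; extract it.
echo "$SAMPLE" | grep -o '\[[a-z]*\]' | tr -d '[]'
```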
So it seems like the recently reported problems[1] with disk
passthrough using virtio-scsi(-single) are caused by a combination of Qemu
since 2.7 not handling memory fragmentation (well) and our compiled-in
default of disabling transparent huge pages on the kernel side.
While I will investigate