[pve-devel] 'Advanced' checkbox in web gui

2018-08-30 Thread Waschbüsch IT-Services GmbH
Hi there,

is there a way to enable the 'Advanced' view by default? For myself, I see no
benefit in hiding config options - it only means additional clicks.
If not, perhaps there could be a flag in datacenter.cfg?
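Something along these lines, say (the option name below is purely hypothetical
- no such key exists today, it is only meant to illustrate the idea):

# /etc/pve/datacenter.cfg - hypothetical flag, for illustration only
ui-view-advanced: 1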

Best,

Martin Waschbüsch



Re: [pve-devel] Updated qemu pkg needed for Meltdown and Spectre?

2018-01-06 Thread Waschbüsch IT-Services GmbH

> On 05.01.2018 at 21:41, Fabian Grünbichler <f.gruenbich...@proxmox.com> wrote:
> 
> On Fri, Jan 05, 2018 at 06:50:33PM +0100, Waschbüsch IT-Services GmbH wrote:
>> 
>> AFAIK Meltdown only affects Intel (& ARM), but not AMD - see 'Forcing
>> direct cache loads' here:
>> 
>> https://lwn.net/SubscriberLink/742702/83606d2d267c0193/
>> 
>> Does anyone know if the current patching efforts will differentiate between 
>> Intel and AMD x86-64 offerings?
>> 
>> I would hate to update kernels with these patches unless my systems are 
>> indeed affected.
>> Not because of possible performance impacts, mind, but because of stability.
>> I just feel it in my bones this major intervention is going to introduce 
>> regressions... :-(
> 
> the Meltdown fix (KPTI) is disabled on AMD by default (and can also be
> disabled using a kernel parameter on all platforms).
> 
> the (planned) Spectre fixes (Retpoline, IBRS and IBPB) are for all/most
> platforms and vendors; some of them will likely be exposed as kernel
> parameters, but some will likely only be available as compile-time options
> or not be tunable at all.

Thanks! That is very good to know.
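For future reference, once a kernel with these fixes is running, the
mitigation status can be read from sysfs. A minimal sketch - assuming a kernel
new enough to expose /sys/devices/system/cpu/vulnerabilities (that interface
shipped alongside these fixes and was only backported later, so it will not
exist on every kernel discussed here):

#!/usr/bin/perl
# Print the kernel's own view of its Meltdown/Spectre mitigation status.
use strict;
use warnings;

my $dir = '/sys/devices/system/cpu/vulnerabilities';
opendir(my $dh, $dir) or die "this kernel does not expose $dir: $!\n";
for my $v (sort grep { !/^\./ } readdir($dh)) {
    open(my $fh, '<', "$dir/$v") or next;
    chomp(my $status = <$fh>);
    print "$v: $status\n";
}
closedir($dh);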


Re: [pve-devel] Updated qemu pkg needed for Meltdown and Spectre?

2018-01-05 Thread Waschbüsch IT-Services GmbH

> On 05.01.2018 at 11:25, Fabian Grünbichler wrote:
> 
> On Thu, Jan 04, 2018 at 09:08:32PM +0100, Stefan Priebe - Profihost AG wrote:
>> 
>> Here we go - attached is the relevant patch - extracted from the
>> opensuse src.rpm.
> 
> this will most likely not be needed for some time, since a pre-requisite
> is having microcode and kernels supporting IBRS and IBPB.
> 
> the microcode update is still ongoing (e.g., some vendors like Lenovo,
> Suse and RH have started releasing updates, but Intel still does not
> have a public package yet, and Debian's partial update is only in
> unstable so far, likely taking at least a week to hit Stretch, and needs
> non-free enabled).
> 
> the kernel changes have been submitted by Intel as a first draft for
> discussion upstream.
> 
> the current plan is to release updated kernel packages ASAP based on 4.4
> and 4.13 with
> - final, tested KPTI patches (not yet available for 4.4 and 4.13!) to
>  fix MELTDOWN for the host kernel
> - backport / cherry-pick of KVM commit to prevent KVM guest->host
>  SPECTRE exploit


AFAIK Meltdown only affects Intel (& ARM), but not AMD - see 'Forcing direct
cache loads' here:

https://lwn.net/SubscriberLink/742702/83606d2d267c0193/ 


Does anyone know if the current patching efforts will differentiate between 
Intel and AMD x86-64 offerings?

I would hate to update kernels with these patches unless my systems are indeed 
affected.
Not because of possible performance impacts, mind, but because of stability.
I just feel it in my bones this major intervention is going to introduce 
regressions... :-(


Re: [pve-devel] AMD EPYC

2017-11-03 Thread Waschbüsch IT-Services GmbH
Hi Martin,

The system is up and running now. Are an IP address and user/pw for ssh and
the Proxmox UI enough?
You could also have access to the IPMI-based console of the box (if you promise
not to break anything) ;-)

Martin

> On 09.10.2017 at 10:59, Martin Maurer <mar...@proxmox.com> wrote:
> 
> Hi,
> 
> yes, sounds interesting. please contact me directly as soon as you can 
> provide access for testing,
> 
> Martin
> 
> On 05.10.2017 09:56, Waschbüsch IT-Services GmbH wrote:
>> Hi all,
>> Since I have read several times, both on this list and on the forum, that
>> AMD-based servers are rarely used for development / testing, I'd like to
>> offer the following:
>> I just ordered a dual-socket EPYC system (Supermicro AS-1123US-TR4 with dual
>> EPYC 7351), and if any of the core developers want to do some testing with it,
>> I'd be willing to give full access to the system for a limited time, say, a
>> week or two?
>> Let me know if you're interested!
>> Martin
> 
> --
> Best Regards,
> 
> Martin Maurer
> 
> mar...@proxmox.com
> http://www.proxmox.com
> 
> 
> Proxmox Server Solutions GmbH
> Bräuhausgasse 37, 1050 Vienna, Austria
> Commercial register no.: FN 258879 f
> Registration office: Handelsgericht Wien
> 





Re: [pve-devel] AMD EPYC

2017-10-17 Thread Waschbüsch IT-Services GmbH
Sure thing, will do that.

Delivery seems to take slightly longer than expected - I guess it's the usual
difference between a marketing / news release and actual availability. ;-)

I'll let you know asap!

Martin

> On 09.10.2017 at 10:59, Martin Maurer <mar...@proxmox.com> wrote:
> 
> Hi,
> 
> yes, sounds interesting. please contact me directly as soon as you can 
> provide access for testing,
> 
> Martin
> 
> On 05.10.2017 09:56, Waschbüsch IT-Services GmbH wrote:
>> Hi all,
>> Since I have read several times, both on this list and on the forum, that
>> AMD-based servers are rarely used for development / testing, I'd like to
>> offer the following:
>> I just ordered a dual-socket EPYC system (Supermicro AS-1123US-TR4 with dual
>> EPYC 7351), and if any of the core developers want to do some testing with it,
>> I'd be willing to give full access to the system for a limited time, say, a
>> week or two?
>> Let me know if you're interested!
>> Martin
> 
> --
> Best Regards,
> 
> Martin Maurer
> 
> mar...@proxmox.com
> http://www.proxmox.com
> 
> 
> Proxmox Server Solutions GmbH
> Bräuhausgasse 37, 1050 Vienna, Austria
> Commercial register no.: FN 258879 f
> Registration office: Handelsgericht Wien
> 





[pve-devel] AMD EPYC

2017-10-05 Thread Waschbüsch IT-Services GmbH
Hi all,


Since I have read several times, both on this list and on the forum, that
AMD-based servers are rarely used for development / testing, I'd like to offer
the following:

I just ordered a dual-socket EPYC system (Supermicro AS-1123US-TR4 with dual
EPYC 7351), and if any of the core developers want to do some testing with it,
I'd be willing to give full access to the system for a limited time, say, a
week or two?

Let me know if you're interested!

Martin




[pve-devel] [PATCH manager] Reflect changed output for 'ceph pg dump -f json'.

2017-08-23 Thread Waschbüsch IT-Services GmbH
Reflect changed output for 'ceph pg dump -f json'.

Signed-off-by: Martin Waschbüsch 
---
Without this patch, all OSDs will show a latency of 0.
Sadly, that is not true. :-)

PVE/API2/Ceph.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index c4d6ffcb..f8ee8b21 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -122,7 +122,7 @@ __PACKAGE__->register_method ({
 	    $new->{total_space} = ($stat->{kb} || 1) * 1024;
 	    $new->{bytes_used} = ($stat->{kb_used} || 0) * 1024;
 	    $new->{percent_used} = ($new->{bytes_used}*100)/$new->{total_space};
-	    if (my $d = $stat->{fs_perf_stat}) {
+	    if (my $d = $stat->{perf_stat}) {
 		$new->{commit_latency_ms} = $d->{commit_latency_ms};
 		$new->{apply_latency_ms} = $d->{apply_latency_ms};
 	    }
-- 
2.11.0



Re: [pve-devel] Ceph OSD latency changes

2017-08-14 Thread Waschbüsch IT-Services GmbH
Solved!

> On 14.08.2017 at 22:34, Waschbüsch IT-Services GmbH <serv...@waschbuesch.it> wrote:
> 
> Be that as it may, the problem which led me to have a look at it is:
> in the UI *all* my OSDs show a latency of 0.
> Using the shell, they don't.

I was on the right track after all - it seems the output *has* changed, but
only a wee bit:

--- Ceph.pm.orig	2017-08-14 21:06:56.686005469 +0000
+++ Ceph.pm	2017-08-14 21:07:02.933759228 +0000
@@ -122,7 +122,7 @@
 	    $new->{total_space} = ($stat->{kb} || 1) * 1024;
 	    $new->{bytes_used} = ($stat->{kb_used} || 0) * 1024;
 	    $new->{percent_used} = ($new->{bytes_used}*100)/$new->{total_space};
-	    if (my $d = $stat->{fs_perf_stat}) {
+	    if (my $d = $stat->{perf_stat}) {
 		$new->{commit_latency_ms} = $d->{commit_latency_ms};
 		$new->{apply_latency_ms} = $d->{apply_latency_ms};
 	    }


This change re-enables the latencies being displayed, at least for me. ;-)
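For anyone who wants to check their own cluster before patching, the following
prints the raw values the same way Ceph.pm reads them - a sketch only: it
assumes PVE::RADOS decodes the mon command reply to JSON (as it does for the
existing code) and an osd_stats layout as seen on luminous; other releases may
nest things differently:

#!/usr/bin/perl
# Dump per-OSD latencies straight from 'pg dump', trying both field names.
use strict;
use warnings;
use PVE::RADOS;

my $rados = PVE::RADOS->new();
my $dump = $rados->mon_command({ prefix => 'pg dump' });
for my $stat (@{$dump->{osd_stats} // []}) {
    my $d = $stat->{perf_stat} // $stat->{fs_perf_stat} // {};
    printf "osd.%s: commit=%s ms, apply=%s ms\n", $stat->{osd},
        $d->{commit_latency_ms} // '?', $d->{apply_latency_ms} // '?';
}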

Martin




Re: [pve-devel] Ceph OSD latency changes

2017-08-14 Thread Waschbüsch IT-Services GmbH

> On 14.08.2017 at 20:39, Waschbüsch IT-Services GmbH <serv...@waschbuesch.it> wrote:
> 
> Hi all,
> 
> In API2/Ceph.pm
> 
> OSD latency information is read in the $get_osd_usage sub from the output of
> the monitor command 'pg dump'.
> I don't know if this used to contain the latency information for each OSD,
> but it does not in the current (luminous) tree.
> 
> I guess the information needs to be obtained from the output of 'osd perf'.
> So, I guess this *should* do the trick. However, I do not have a development
> cluster up and running, so apologies for not being able to test this.

Correcting myself here:

ceph pg dump -f json does contain the perf data; the plain-text version
(without -f json) does not.

Be that as it may, the problem which led me to have a look at it is: in the UI
*all* my OSDs show a latency of 0.
Using the shell, they don't.

I'll keep looking, but perhaps someone else has an idea?

Martin




[pve-devel] Ceph OSD latency changes

2017-08-14 Thread Waschbüsch IT-Services GmbH
Hi all,

In API2/Ceph.pm

OSD latency information is read in the $get_osd_usage sub from the output of
the monitor command 'pg dump'.
I don't know if this used to contain the latency information for each OSD, but
it does not in the current (luminous) tree.

I guess the information needs to be obtained from the output of 'osd perf'.
So, I guess this *should* do the trick. However, I do not have a development
cluster up and running, so apologies for not being able to test this.



--- Ceph.pm.orig	2017-08-14 18:27:04.441035244 +0000
+++ Ceph.pm	2017-08-14 18:37:30.799198316 +0000
@@ -99,6 +99,13 @@
 	    $osdmetadata->{$osd->{id}} = $osd;
 	}
 
+	my $osdperfdata_tmp = $rados->mon_command({ prefix => 'osd perf' });
+
+	my $osdperfdata = {};
+	foreach my $osdperf (@$osdperfdata_tmp) {
+	    $osdperfdata->{$osdperf->{id}} = $osdperf;
+	}
+
 	my $nodes = {};
 	my $newnodes = {};
 	foreach my $e (@{$res->{nodes}}) {
@@ -122,12 +129,13 @@
 	    $new->{total_space} = ($stat->{kb} || 1) * 1024;
 	    $new->{bytes_used} = ($stat->{kb_used} || 0) * 1024;
 	    $new->{percent_used} = ($new->{bytes_used}*100)/$new->{total_space};
-	    if (my $d = $stat->{fs_perf_stat}) {
-		$new->{commit_latency_ms} = $d->{commit_latency_ms};
-		$new->{apply_latency_ms} = $d->{apply_latency_ms};
-	    }
 	}
 
+	if (my $d = $osdperfdata->{$e->{id}}) {
+	    $new->{commit_latency_ms} = $d->{commit_latency_ms};
+	    $new->{apply_latency_ms} = $d->{apply_latency_ms};
+	}
+
 	my $osdmd = $osdmetadata->{$e->{id}};
 	if ($e->{type} eq 'osd' && $osdmd) {
 	    if ($osdmd->{bluefs}) {
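One caveat with the sketch above: on luminous the raw 'osd perf' output seems
to be nested rather than a flat array, so the first loop would likely need
unwrapping - structure from memory and unverified, so please double-check:

# 'osd perf' appears to return (luminous, unverified):
#   { osd_perf_infos => [
#       { id => 0, perf_stats => { commit_latency_ms => ..., apply_latency_ms => ... } },
#       ...
#   ] }
my $osdperfdata_tmp = $rados->mon_command({ prefix => 'osd perf' });
my $osdperfdata = {};
foreach my $osdperf (@{$osdperfdata_tmp->{osd_perf_infos} // []}) {
    $osdperfdata->{$osdperf->{id}} = $osdperf->{perf_stats};
}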






[pve-devel] megaraid interrupts

2017-04-20 Thread Waschbüsch IT-Services GmbH
Hi all,

In case this is helpful to anyone else:

I just installed PVE 4.4 on a box with a megaraid controller (9261-8i).
For some reason, the device's interrupts were not distributed among CPU cores.
After digging a little, I found that the version of the megaraid driver that 
comes with the current pve-kernel (4.4.49-1-pve) is:

06.810.09.00-rc1
(Note: At least for my card, there is no stable version of the driver matching 
this release candidate.)

Anyway, more recent versions of the driver have resolved some issues around 
interrupt handling.

I found that, for my system, upgrading the driver to version

06.813.05.00

solved my issue, and interrupts were distributed across CPU cores once more.
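For anyone wanting to check this on their own box, here is a quick diagnostic
sketch that counts how many cores actually service the controller's IRQs (it
only assumes the megaraid_sas vectors show up as 'megasas' in
/proc/interrupts):

#!/usr/bin/perl
# Show how megaraid_sas interrupts are spread across CPU cores.
use strict;
use warnings;

open(my $fh, '<', '/proc/interrupts') or die "cannot read /proc/interrupts: $!";
my @cpus = split ' ', scalar <$fh>;        # header line: CPU0 CPU1 ...
while (my $line = <$fh>) {
    next unless $line =~ /megasas/;
    my ($irq, @fields) = split ' ', $line;
    $irq =~ s/:$//;
    my @counts = @fields[0 .. $#cpus];     # per-core counts follow the IRQ number
    my $busy = grep { $_ > 0 } @counts;
    printf "IRQ %s handled on %d of %d cores\n", $irq, $busy, scalar @cpus;
}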

Best,

Martin


Re: [pve-devel] HW driver / DKMS

2017-01-14 Thread Waschbüsch IT-Services GmbH

> On 14.01.2017 at 10:29, Dmitry Petuhov wrote:
> 
> Yes, you can. Just install the pve-headers package corresponding to your
> running kernel. Also, you will have to manually install headers on every
> kernel update.
> Or you can just wait for the next PVE kernel release. It usually contains the
> latest RAID (including aacraid) and NIC drivers.
> 
> But I don't think that you will have much gain from these features.

Thanks, Dmitry.
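For the record, that amounts to something like this (package name pattern
assumed from the current PVE naming scheme - please correct me if it differs):

apt-get install pve-headers-$(uname -r)   # headers matching the running kernel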

While not expecting any large performance gain, I am currently comparing two
otherwise identical systems, where one uses NUMA with 4 nodes and the other is
set to node interleave, effectively rendering it a single-node UMA system.
The goal is to see which is better suited for my specific workloads & VM usage.

Making as many components of the NUMA system NUMA-aware sounds like a good
idea in that context.

Martin


[pve-devel] HW driver / DKMS

2017-01-13 Thread Waschbüsch IT-Services GmbH
Hi there,

Can I use the DKMS infrastructure with Proxmox kernels?

I ask because there is a newer driver for current Microsemi / Adaptec RAID 
adapters:

http://download.adaptec.com/raid/aac/linux/aacraid-linux-src-1.2.1-52011.tgz
(or for dkms) 
http://download.adaptec.com/raid/aac/linux/aacraid-dkms-1.2.1-52011.tgz

Release notes: http://download.adaptec.com/pdfs/readme/relnotes_arc_11_2016.pdf

According to the release notes the driver enables:

- NUMA support for the RAID controller products. This provides improved
  performance under NUMA CPU architectures.
- Added command coalescing support for RAID devices for small block
  sequential I/Os.


Both these features sound promising and I'd really like to give 'em a spin. ;-)

Thanks,

Martin


Re: [pve-devel] VM config: User Interface <-> Online Help mismatch

2017-01-06 Thread Waschbüsch IT-Services GmbH
Hi Dietmar,

> On 06.01.2017 at 12:39, Dietmar Maurer wrote:
> 
>> The online help explains the ballooning feature quite nicely, but there is a
>> mismatch:
>> Under the 'Use fixed size memory' option, I can set the memory size and there
>> is a checkbox 'Ballooning'.
>> I find this confusing. If I set a fixed amount of memory, then that seems to
>> me to be mutually exclusive with 'Ballooning'.
> 
> The balloon driver is normally enabled, because it delivers some information
> about used memory (even if it does not manage memory). But some users
> reported kernel crashes with older kernels, so this option is there
> to disable the driver completely...

Thanks for the clarification!
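For the record, the config-level equivalent of that checkbox appears to be the
balloon option - a sketch, assuming a VM with 4 GiB of fixed memory (option
semantics as I understand them from the docs):

# /etc/pve/qemu-server/<vmid>.conf
memory: 4096
balloon: 0    # 0 disables the balloon device entirely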


[pve-devel] VM config: User Interface <-> Online Help mismatch

2017-01-05 Thread Waschbüsch IT-Services GmbH
Hi all,

I just stumbled across the following:

When configuring memory for a VM, you can choose between the options 'Use fixed 
size memory' and 'Automatically allocate memory within this range'.
The online help explains the ballooning feature quite nicely, but there is a 
mismatch:
Under the 'Use fixed size memory' option, I can set the memory size and there 
is a checkbox 'Ballooning'.
I find this confusing. If I set a fixed amount of memory, then that seems to me 
to be mutually exclusive with 'Ballooning'.
Also, the online help does not show this checkbox at all, so I wonder if this 
is a glitch in the UI and should be grouped differently? Or is the online help 
not up to date?

Most importantly, is my assumption right that 'Use fixed size memory' and
'Ballooning' are mutually exclusive?

Thanks,

Martin





Re: [pve-devel] upgrade for openvswitch takes node offline?

2016-12-02 Thread Waschbüsch IT-Services GmbH

> On 02.12.2016 at 20:04, Michael Rasmussen <m...@datanom.net> wrote:
> 
> On Fri, 2 Dec 2016 19:54:20 +0100
> Waschbüsch IT-Services GmbH <serv...@waschbuesch.it> wrote:
> 
>> 
>> Any ideas how that could be avoided? Like, at all. :-/
>> 
> When logged in, could you try: dpkg --configure -a

That comes back empty since, luckily, I had started the upgrade in a screen
session and could re-attach to it after logging in via the serial console.
It was waiting for input on grub-pc (asking for the boot device).
So even with a bit of a delay, the upgrade ran through without errors.
That did not bring the vswitch-based interfaces back up, though.


[pve-devel] upgrade for openvswitch takes node offline?

2016-12-02 Thread Waschbüsch IT-Services GmbH
Hi all,

I just upgraded a current node running PVE 4.3 to the latest updates available 
on the enterprise repo.

Things worked OK until apt got to:

Preparing to unpack .../proxmox-ve_4.3-72_all.deb ...
Unpacking proxmox-ve (4.3-72) over (4.3-71) ...
Preparing to unpack .../openvswitch-switch_2.6.0-2_amd64.deb ...
packet_write_wait: Connection to 80.254.129.226 port 22: Broken pipe

After logging in via the serial console, I found that Open vSwitch never came
back online and all network interfaces relying on it were gone.

Any ideas how that could be avoided? Like, at all. :-/

Martin Waschbüsch


Re: [pve-devel] [PATCH] rbdplugin : disable debug_ms to increase performance

2016-11-06 Thread Waschbüsch IT-Services GmbH

> On 05.11.2016 at 15:43, Alexandre Derumier wrote:
> 
> This increases IOPS and decreases latencies by around 30%

Alexandre,

apart from debug_ms = 0/0, what are the currently suggested defaults for these 
performance tweaks?
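In ceph.conf terms, the only one I have so far is the tweak from your patch
(section placement assumed; the other suggested defaults are exactly what I am
asking about):

[client]
debug_ms = 0/0    # disable messenger debug logging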


Re: [pve-devel] [PATCH] qemu-img convert : use default cache=unsafe instead writeback

2016-08-01 Thread Waschbüsch IT-Services GmbH

> On 01.08.2016 at 09:44, Alexandre DERUMIER wrote:
> 
>>> Answering myself, 'close' does not issue flush/fsync. 
> 
> close sends a flush
> 
> It was introduced by this commit:
> 
> [Qemu-devel] [PATCH v3] qemu-img: let 'qemu-img convert' flush data
> https://lists.nongnu.org/archive/html/qemu-devel/2012-04/msg02936.html

thanks for the clarification!

Martin


Re: [pve-devel] [PATCH] qemu-img convert : use default cache=unsafe instead writeback

2016-08-01 Thread Waschbüsch IT-Services GmbH

> On 01.08.2016 at 09:26, Dominik Csapak wrote:
> 
> On 08/01/2016 08:51 AM, Alexandre Derumier wrote:
>> Signed-off-by: Alexandre Derumier 
>> ---
>> PVE/QemuServer.pm | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>> 
>> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
>> index 7778fb8..2414fd8 100644
>> --- a/PVE/QemuServer.pm
>> +++ b/PVE/QemuServer.pm
>> @@ -5605,7 +5605,7 @@ sub qemu_img_convert {
>>      my $dst_path = PVE::Storage::path($storecfg, $dst_volid);
>> 
>>      my $cmd = [];
>> -    push @$cmd, '/usr/bin/qemu-img', 'convert', '-t', 'writeback', '-p', '-n';
>> +    push @$cmd, '/usr/bin/qemu-img', 'convert', '-p', '-n';
>>      push @$cmd, '-s', $snapname if($snapname && $src_format eq "qcow2");
>>      push @$cmd, '-f', $src_format, '-O', $dst_format, $src_path;
>>      if ($is_zero_initialized) {
>> 
> 
> is this really safe?
> 
> this also impacts cloning and the "move disk" function.
> what if i clone a vm to an nfs share and immediately move the vm to another 
> host, then start it?

Also, when using move disk, will this make the 'Delete source' option 
potentially hazardous?
How do we determine that the disk has been completely and safely moved before 
removing the original?
I like performance improvements as much as the next guy, but I am sort of 
uneasy about this one.


Re: [pve-devel] Suggest VirtIO drivers as default when creating a Linux VM

2016-07-19 Thread Waschbüsch IT-Services GmbH

> On 19.07.2016 at 13:00, Emmanuel Kasper wrote:
> 
> Hi
> 
> This patch series adds capabilities to store Qemu Wizard Defaults, and
> use these capabilities
> to set virtio by default for Linux machines.

Sounds like a really good idea. But why not go a step further and allow
creating presets to manage often-used but widely different sets of settings?
E.g. I could add a preset for HA-managed VMs with storage on Ceph, while
another preset could use local storage and virtio.
You could even tie those in with the resource pools, etc.

Martin


Re: [pve-devel] MAC address generation

2016-07-08 Thread Waschbüsch IT-Services GmbH

> On 07.07.2016 at 17:26, Andreas Steinel wrote:
> 
> Hi,
> 
> I currently only have one big 3.4 install (>150 VMs), on which I compared
> the generated MACs and found out that they are completely random. Are there
> plans, or perhaps already an implementation, to generate only from a
> specific region? I did not find any changes in the roadmap.
> 
> In KVM/Qemu it is standard to use a prefix of 52:54:00, which was
> present in Proxmox some years ago.

While we are at it: are generated MAC addresses guaranteed to be unique across
a cluster?
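For context, the 52:54:00 scheme Andreas mentions boils down to three fixed
bytes plus three random ones - a minimal sketch, not the actual PVE
implementation:

#!/usr/bin/perl
# Generate a QEMU/KVM-style MAC: 52:54:00 prefix + 24 random bits.
use strict;
use warnings;

my $mac = sprintf "52:54:00:%02X:%02X:%02X", map { int rand 256 } 1 .. 3;
print "$mac\n";

With only ~16.7 million possibilities per prefix, collisions across a large
cluster are unlikely but not impossible - hence the question.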


Re: [pve-devel] criu for lxc news on criu.org

2015-03-29 Thread Waschbüsch IT-Services GmbH

 On 29.03.2015 at 20:18, Daniel Hunsaker danhunsa...@gmail.com wrote:
 
 There's Gentoo, which seemed pretty solid and stable while I was using it, 
 but I haven't looked at their kernels lately to see how they are faring...

But being a rolling-release OS, would that be at all suitable?

Martin




[pve-devel] Live migration should check for host compatibility?

2015-03-21 Thread Waschbüsch IT-Services GmbH
Hi all,

Martin has kindly redirected me to the list as the appropriate place to ask / 
discuss this:

I noticed that, even though a KVM guest is set to CPU type 'host', the live
migration feature does not check for compatibility with the destination host.
E.g. moving from an Opteron 6366 to an Intel Xeon E5420 is going to cause the
migrated VM to choke and die due to lots of CPU features no longer being there.

That in itself is not surprising, but my suggestion would be to at least show
a warning popup when choosing to migrate a KVM guest which is set to CPU type
'host'.
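The check itself would not need much - roughly something like this, comparing
CPU flag sets taken from /proc/cpuinfo dumps of the two nodes (a sketch only;
real code would go through the PVE API rather than ad-hoc files):

#!/usr/bin/perl
# Usage: compare-flags.pl cpuinfo.source cpuinfo.target
# Lists CPU flags the source node has but the target lacks.
use strict;
use warnings;

sub flags_of {
    my ($path) = @_;
    open(my $fh, '<', $path) or die "$path: $!";
    local $/;                                  # slurp the whole file
    my ($flags) = <$fh> =~ /^flags\s*:\s*(.+)$/m;
    return { map { $_ => 1 } split ' ', ($flags // '') };
}

my ($src_file, $dst_file) = @ARGV;
die "usage: $0 <cpuinfo.source> <cpuinfo.target>\n" unless $src_file && $dst_file;
my $src = flags_of($src_file);
my $dst = flags_of($dst_file);
my @missing = sort grep { !$dst->{$_} } keys %$src;
print @missing ? "target lacks: @missing\n" : "all source flags present on target\n";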

Best,

Martin

