No, this is the real solution.
The first patch was only a workaround and will not go into the code.
On 09/20/2016 07:37 AM, Alexandre DERUMIER wrote:
> Do we need this patch, with the new qemu-kvm patch?
>
>
> - Original Message -
> From: "Wolfgang Link"
> To:
One thing that I think could be great
is to be able to have unique VMIDs across different Proxmox clusters,
maybe with a letter prefix, for example (cluster1: vmid a100, cluster2:
vmid b100).
This way, it would be possible to migrate VMs across clusters (offline, or maybe
even online),
and
Do we need this patch, with the new qemu-kvm patch?
- Original Message -
From: "Wolfgang Link"
To: "pve-devel"
Sent: Friday, 16 September 2016 13:14:51
Subject: [pve-devel] [PATCH qemu-server] fix Bug #615 Windows guests
suddenly hang
Ok,
I have tested the patched kernel,
and I still have the same behavior: the frequency still goes up and down.
(But it's not stuck anymore, like the bug I saw last week.)
I think this is the normal behavior of pstate, as there is a lower limit.
But for virtualisation, I think it's really bad.
From: "Dr. David Alan Gilbert"
Load the LAPIC state during post_load (rather than when the CPU
starts).
This allows an interrupt to be delivered from the ioapic to
the lapic prior to CPU loading, in particular the RTC that starts
ticking as soon as we load its state.
Partially.
Because it actually supports subvols; it just was not checked.
Signed-off-by: Dmitry Petuhov
---
PVE/Storage/NFSPlugin.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/Storage/NFSPlugin.pm b/PVE/Storage/NFSPlugin.pm
index df00f37..bfc2356
This patch series simplifies LXC volume creation and makes it independent
of storage plugin names. It will allow LXC to be used with custom plugins.
Fixed version per review comments.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
Instead, rely on plugin-defined supported formats to decide if it
supports subvols.
Signed-off-by: Dmitry Petuhov
---
Still works around the 'rbd' plugin, which cannot be used for LXC without
krbd. Maybe a better way is to pass $scfg into the plugin's plugindata()
in the storage code? So
On 09/19/2016 11:23 AM, Emmanuel Kasper wrote:
> On 09/19/2016 10:38 AM, Caspar Smit wrote:
>> Ok, but since the scsihw: 'virtio-scsi-pci' is set at the generic OSdefault
>> template and the w2k OSdefaults has a parent generic, doesn't that inherit
>> all settings from generic? Why else does it
On 09/19/2016 10:38 AM, Caspar Smit wrote:
> Ok, but since the scsihw: 'virtio-scsi-pci' is set at the generic OSdefault
> template and the w2k OSdefaults has a parent generic, doesn't that inherit
> all settings from generic? Why else does it need a parent?
>
> As I read the code, the 'w2k'
Ok, but since the scsihw: 'virtio-scsi-pci' is set at the generic OSdefault
template and the w2k OSdefaults has a parent generic, doesn't that inherit
all settings from generic? Why else does it need a parent?
As I read the code, the 'w2k' OSdefaults are:
busType: 'ide' (from generic parent)
> Also, why can't I set the volume size to zero on volume creation via the web UI
> and so use subvols? Is it a bug?
Simply using directories is a hack, because there is no disk quota
in that case. That is why I disabled it on the GUI. Or is it a feature?
On Mon, Sep 19, 2016 at 10:37:01AM +0300, Dmitry Petuhov wrote:
> 19.09.2016 08:51, Fabian Grünbichler wrote:
> >
> > general remark, rest of comments inline:
> >
> > the indentation is all messed up. I know our codebase is
On 09/16/2016 03:11 PM, Caspar Smit wrote:
> Hi,
>
> I'm assuming this commit will break the 'w2k' pveOS default (because the
> scsihw will be inherited from generic):
Not really, because presetting a different kind of SCSI controller will
not impact the _default_ controller, which will still be
>> And it's being changed based on cpu load, like the actual governor is ondemand.
From what I read, the intel_pstate "performance" governor has a range min_freq /
max_freq.
So it seems to be different from the cpufreq "performance" governor, which is "max
performance"
(min_freq can be changed manually through sysfs,
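As a side note on the sysfs knobs being discussed: the active scaling driver, governor, and per-CPU frequency range can be inspected like this. The paths below are the standard kernel sysfs locations for cpufreq and intel_pstate; they only exist on hosts where the respective driver is loaded, so this sketch guards every read.

```shell
#!/bin/sh
# Sketch: show driver, governor, and frequency range for each CPU.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    f="$cpu/cpufreq"
    [ -d "$f" ] || continue    # cpufreq not active for this CPU
    printf '%s: driver=%s governor=%s min=%s max=%s cur=%s\n' \
        "$(basename "$cpu")" \
        "$(cat "$f/scaling_driver" 2>/dev/null)" \
        "$(cat "$f/scaling_governor" 2>/dev/null)" \
        "$(cat "$f/scaling_min_freq" 2>/dev/null)" \
        "$(cat "$f/scaling_max_freq" 2>/dev/null)" \
        "$(cat "$f/scaling_cur_freq" 2>/dev/null)"
done

# intel_pstate additionally exposes a global percentage range; raising
# min_perf_pct pins the lower bound of the "performance" range:
if [ -d /sys/devices/system/cpu/intel_pstate ]; then
    cat /sys/devices/system/cpu/intel_pstate/min_perf_pct
    # echo 100 > /sys/devices/system/cpu/intel_pstate/min_perf_pct  # as root
fi
```

Writing to min_perf_pct (commented out above) is the intel_pstate equivalent of raising scaling_min_freq manually via sysfs, as mentioned in the thread.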
CVE-2016-7170: vmsvga: correct bitmap and pixmap size checks
CVE-2016-7421: scsi: pvscsi: limit process IO loop to ring size
CVE-2016-7423: scsi: mptsas: use g_new0 to allocate MPTSASRequest object
---
...vga-correct-bitmap-and-pixmap-size-checks.patch | 45 ++
19.09.2016 01:29, Alexandre DERUMIER wrote:
Hi,
I have had some strange behaviour on some hosts last week (cpu performance
degrading),
and I have found that 3 hosts of my 15-host cluster have the wrong cpu frequency.
All nodes are Dell R630, with Xeon v3 3.1 GHz (all with the latest bios/microcode
19.09.2016 08:51, Fabian Grünbichler wrote:
I sent you some feedback on patch #1 yesterday. Since it breaks
stuff, it can't be merged (yet). Feel free to send an updated v2 (or,
if you feel the feedback requires discussion, feel free to respond).
Thanks!
Sorry, I did not receive it. Maybe
On 19.09.2016 at 09:01, Alexandre DERUMIER wrote:
>>> @alexandre: please can you test if that solves your problem?
>
> I'll try. Do I need to apply the whole patch series?
Normally not. Just apply the whole cpufreq series. If it does not
compile, please report to me. I'll point you to the
>>@alexandre: please can you test if that solves your problem?
I'll try. Do I need to apply the whole patch series?
- Original Message -
From: "dietmar"
To: "Stefan Priebe, Profihost AG", "pve-devel"
Cc:
> It has become difficult to manage, as we can't easily migrate VMs across
> clusters.
>>But this is difficult because there is no shared storage in that case?
I have local DC storage, but shared across DCs (mainly to do live migration, then
storage migration).
But in the future (>3-5 years),
>>What problem do you want to solve exactly? Only more nodes? Or nodes
>>between physically distant data centers?
Mainly more nodes; multi-datacenter is a plus. (I mainly use multi-datacenter to
do live migration, then storage migration on local DC storage.)
>>An alternative plan would be to
On 19.09.2016 at 07:36, Dietmar Maurer wrote:
>> The cpufreq and intel pstate drivers were somewhat broken in 4.4; there were a
>> lot of changes in 4.5 or 4.6 (can't remember). I'm using around 20 cpufreq
>> patches (also a lot of optimizations) in 4.4.
>>
>> I grabbed those from Mr. Hoffstaette