We are currently in the process of rolling out release 3.4 (hopefully today).
Oh ok, great !
I plan to start Debian Jessie development now, so all new patches are
targeted for the Jessie based release (4.0).
Great, great :) (time for a systemd troll? ;)
I will backport important fixes to 3.4,
Hi, any comments about my last patches?
qemu-server
--
[PATCH] disable hyper-v enlightement for xvga pci passthrough
[pve-devel] qemu-server : move global iothread option as drive option
[pve-devel] qemu-server : unplug scsi controller if no more disk exist
[PATCH] unplug scsi
Hi Alexandre,
I will take a look at those patches next week.
Hi all,
As a follow up to my post to the forum
(http://forum.proxmox.com/threads/21083-Proxmox-VE-3-4-released!?p=107495#post107495)
I was wondering if anything has changed in 3.4 in the way quorum is
formed?
--
Hilsen/Regards
Michael Rasmussen
Get my public GnuPG keys:
michael at rasmussen
I see that memory hotplug is experimental...
Is there some specific system/requirement to use this?
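For what it's worth, on Linux guests hotplugged memory usually has to be brought online by the guest itself; a common approach is a udev rule. A sketch, assuming a udev-based Linux guest (the rule file name is arbitrary; written to the current directory here rather than /etc/udev/rules.d/):

```shell
# Sketch: a udev rule that onlines hotplugged memory banks in a Linux guest.
# In a real guest this file would be installed as
# /etc/udev/rules.d/80-hotplug-mem.rules; here it is written locally.
rule='SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"'
printf '%s\n' "$rule" > 80-hotplug-mem.rules
cat 80-hotplug-mem.rules
```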
2015-02-19 11:47 GMT-02:00 Gilberto Nunes gilberto.nune...@gmail.com:
Nice work
2015-02-19 11:38 GMT-02:00 Martin Maurer mar...@proxmox.com:
Hi all,
We just released Proxmox VE 3.4 - the new version is centered around
ZFS, including a new ISO installer supporting all ZFS raid levels. Small
but quite useful additions include hotplug, pending changes,
start/stop/migrate all VMs, and network disconnect (unplug virtual
NIC).
I guess this would only make sense if the qemu people accept such patches
upstream.
Maybe it's worth asking on the qemu list for opinions.
Just found this:
guest agent: add guest-exec and guest-exec-status interfaces
https://www.mail-archive.com/qemu-devel@nongnu.org/msg271775.html
(I think they are
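From the linked thread, the interface is plain JSON spoken over the qemu-guest-agent socket. A rough sketch of what the two commands look like (field names follow the proposal and may have changed on merge; the path, arguments, pid, and socket path are all illustrative):

```shell
# Sketch: guest-exec / guest-exec-status payloads as JSON for the
# qemu-guest-agent socket. This only builds the payloads; sending them
# would be something like:
#   echo "$exec_cmd" | socat - UNIX-CONNECT:/var/run/qemu-ga.sock
exec_cmd='{"execute": "guest-exec", "arguments": {"path": "/bin/ls", "arg": ["-l", "/"], "capture-output": true}}'
status_cmd='{"execute": "guest-exec-status", "arguments": {"pid": 1234}}'
printf '%s\n%s\n' "$exec_cmd" "$status_cmd"
```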
Also,
has someone already had a look at the new vhost-user feature?
http://www.redhat.com/archives/libvir-list/2014-May/msg00934.html
http://www.virtualopensystems.com/fr/solutions/guides/snabbswitch-qemu/
OpenStack is going to implement it, and so is the new snabbswitch virtual switch.
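For reference, the guides above boil down to a few QEMU flags; a sketch of the command-line fragment involved (the socket path and sizes are illustrative, and vhost-user requires a shared-memory guest RAM backend; the socket is created by the switch, e.g. snabbswitch):

```shell
# Sketch: QEMU command-line fragment for a vhost-user netdev, per the
# linked guides. Only assembled into a string here, not executed.
qemu_net_args="-object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem \
-chardev socket,id=chr0,path=/tmp/vhost-user.sock \
-netdev type=vhost-user,id=net0,chardev=chr0 \
-device virtio-net-pci,netdev=net0"
printf '%s\n' "$qemu_net_args"
```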
I don't
Thanks a lot to everyone involved!
On 19/02/15 14:38, Martin Maurer wrote:
Hi,
I have upgraded my production cluster to Proxmox 3.4,
and I noticed some visual refresh glitches in the hardware view.
It seems that default values are displayed (memory 512MB, ...), and half a second later
the pending API refresh shows the correct values.
(Everything is working fine, it's just visually
On Thu, 19 Feb 2015 21:18:23 +0100
Michael Rasmussen m...@datanom.net wrote:
It seems to come from ObjectGrid.js,
when we add default values to the diffstore:
if (rows) {
    Ext.Object.each(rows, function(key, rowdef) {
        if (Ext.isDefined(rowdef.defaultValue)) {
            store.add({ key: key, value: rowdef.defaultValue });  // <-- here
        }
    });
}
I am giving up for now and have reverted back to
pve-kernel-2.6.32-34-pve.
I forgot to mention that my conclusion is that the infiniband drivers
delivered with pve-kernel-2.6.32-37-pve are broken.
I verified the corosync versions, and there was no update recently.
So I guess a driver
On Fri, 20 Feb 2015 00:29:36 +0100
Michael Rasmussen m...@datanom.net wrote:
On Thu, 19 Feb 2015 23:55:27 +0100
Michael Rasmussen m...@datanom.net wrote:
Now it gets really strange. If I configure the cluster to use unicast then
suddenly multicast starts to work!!!
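A quick way to check whether multicast itself is broken is omping, which ships with Proxmox VE; a sketch (the node names are illustrative; the same command has to run on every node at about the same time):

```shell
# Sketch: testing cluster multicast with omping (node names illustrative).
# 0% multicast loss means multicast works; unicast-only responses point at
# a multicast problem (IGMP snooping, driver, ...). Only assembled here.
nodes="node1 node2 node3"
cmd="omping -c 60 -i 1 $nodes"
printf '%s\n' "$cmd"
```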
On Thu, 19 Feb 2015 23:29:17 +0100
Michael Rasmussen m...@datanom.net wrote:
It seems this is the case. Multicast over infiniband is broken on
pve-kernel-2.6.32-37-pve!!
Just to be certain I added netmtu=2044 to totem in cluster.conf, but
this did not change anything and was not needed on
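For reference, that netmtu tweak goes on the totem element of /etc/pve/cluster.conf on the cman-based 3.x stack; a sketch (other totem attributes omitted, and as noted above it turned out not to help here):

```
<!-- Sketch: netmtu on the totem element of cluster.conf (cman/corosync 1.x) -->
<totem netmtu="2044"/>
```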