Does anyone here use an HP RAID (HP P410, etc.) with Proxmox and monitor
the SMART status on the drives using smartctl or another method? I want
to monitor my server via SNMP for drive health and I am attempting to
determine the best way to do this.
Thanks,
-Hexis
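For what it's worth, smartmontools can usually reach the physical drives behind an HP Smart Array controller such as the P410 through the cciss/hpsa driver. A minimal sketch, assuming the controller's logical volume shows up as /dev/sda:

```shell
# Query full SMART data for the physical drives behind the controller.
# The logical volume appears as a single block device (/dev/sda here);
# individual disks are addressed by their cciss port number.
smartctl -a -d cciss,0 /dev/sda   # first physical drive
smartctl -a -d cciss,1 /dev/sda   # second physical drive

# -H prints just the overall health verdict, which is easier to feed
# into an SNMP extend/exec script:
smartctl -H -d cciss,0 /dev/sda
```

Such a one-line health check can then be exposed to SNMP via an `extend` entry in snmpd.conf.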
Thanks for pointing this out.
I'll try offline migration and see how it behaves.
On 04/10/2016 at 12:59, Alexandre DERUMIER wrote:
> But if you do the migration offline, it should work. (I don't know if you can
> stop your VMs during the migration)
>
> - Original Mail -
> From:
>
> pve-qemu-kvm (2.6.1-2) unstable; urgency=medium
>* virtio related live migration fixes
Ha! That's very interesting; maybe that would fix my problem
where moving VM disks to new storage shuts down the VM.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
> Now about migration, maybe it's a qemu bug, but I never hit it.
>
> Do you have the same problem without HA enabled ?
> Can you reproduce it 100% ?
>
Yes, 100% when memory hotplug is enabled.
Besides, I found an interesting update to qemu-kvm, because I'm using
version 2.6-1 on all nodes:
Hi Marco,
On 10/04/2016 05:43 PM, Marco Gaiarin wrote:
> Mandi! Alwin Antreich
> In chel di` si favelave...
>
>> Only one is needed and on PVE 4 the default is timesyncd.
>
> OK, I've removed 'ntp' from the 'apt-get install' command on the wiki page.
As a remark, you need to configure
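For reference, timesyncd takes its servers from /etc/systemd/timesyncd.conf; a minimal sketch (the pool names are just examples):

```
# /etc/systemd/timesyncd.conf
[Time]
NTP=0.debian.pool.ntp.org 1.debian.pool.ntp.org
```

After editing, restart the service with `systemctl restart systemd-timesyncd` and verify with `timedatectl status`.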
Probably a dumb question, but I've not found an answer on the wiki...
Is there a way to 'run' a VM for non-Intel/AMD CPUs? And manage it via
the PVE interface?
I've an old Sun UltraSparc5 lying around that I need to keep...
--
dott. Marco Gaiarin GNUPG Key
Hi,
We were interested in setting up SSD cache tiering for our RBD storage, but the
Ceph documentation seems to recommend against it:
http://docs.ceph.com/docs/master/rados/operations/cache-tiering/?highlight=tier#known-bad-workloads
Has anyone had experience with it?
Mathieu
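For context, the Ceph-side commands to attach a cache tier are roughly the following (a sketch with placeholder pool names; this says nothing about how well it integrates with the Proxmox UI):

```shell
# Attach a pre-created SSD pool as a cache tier in front of the RBD
# data pool, put it in writeback mode, and redirect client traffic:
ceph osd tier add rbd-data ssd-cache
ceph osd tier cache-mode ssd-cache writeback
ceph osd tier set-overlay rbd-data ssd-cache

# The cache pool needs a hit-set and sizing targets, otherwise
# flushing and eviction never trigger:
ceph osd pool set ssd-cache hit_set_type bloom
ceph osd pool set ssd-cache target_max_bytes 100000000000
```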
- Original
Hi,
On 10/04/2016 09:30 AM, Marco Gaiarin wrote:
Probably a dumb question, but I've not found an answer on the wiki...
no dumb question!
Is there a way to 'run' a VM for non-Intel/AMD CPUs? And manage it via
the PVE interface?
No, currently not; our KVM/QEMU package gets compiled with the
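Outside of the PVE interface, plain QEMU does ship full-system emulators for other architectures; a rough sketch for a sun4u (UltraSPARC) guest, with illustrative image and option names:

```shell
# Debian package providing qemu-system-sparc and qemu-system-sparc64
apt-get install qemu-system-sparc

# Boot an emulated sun4u machine from a raw disk image
qemu-system-sparc64 -M sun4u -m 512 \
    -drive file=ultra5.img,format=raw \
    -nographic
```

Note this is full emulation (TCG), not KVM acceleration, so expect it to be slow.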
Hello Brian,
Thanks for the tip, it may be my last-chance solution..
Fortunately I kept all the original disk files on an NFS share, so I'm able
to roll back and re-do the migration... if I manage to make qemu mirroring
work with sparse VMDKs..
On 03/10/2016 at 21:11, Brian :: wrote:
> Hi
On 03/10/16 21:12, Brian :: wrote:
> Jeasuss - someone got out of bed on the wrong side today!
>
:-)
> I've just been working on something that had had me stuck in the 4.3
> UI for the past 48 hours on and off.
> Personally I like it but that's just my opinion - and I did give the
> guys
Hi,
the limitation comes from NFS. (Proxmox correctly uses detect-zeroes, but the NFS
protocol has limitations; I think it'll be fixed in NFS 4.2)
I have the same problem when I migrate from NFS to Ceph.
I'm using discard/trimming to reclaim space after the migration.
- Original Mail -
From: "Dhaussy
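The discard/trim step can be run from inside the guest once the migration is done; a sketch, assuming the virtual disk is attached with discard support enabled:

```shell
# Trim every mounted filesystem that supports discard, verbosely:
fstrim -av

# Or trim a single mount point:
fstrim -v /
```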
Mandi! Thomas Lamprecht
In chel di` si favelave...
> >Probably a dumb question, but I've not found an answer on the wiki...
> no dumb question!
;-)
> >Is there a way to 'run' a VM for non-Intel/AMD CPUs? And manage it via
> >the PVE interface?
> No, currently not; our KVM/QEMU package gets compiled
Hi there,
is it possible to easily change the LVM volumes in LXC?
Someone created an LXC container on our Backup-Space and I want to migrate it
back to our Storage-System.
In KVM this seems easy by just clicking around, but in LXC it seems not
supported yet :-(
Cheers
Daniel
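One workaround until storage migration lands for LXC is a backup/restore cycle onto the target storage; a sketch with placeholder container ID and storage names:

```shell
# Stop-mode backup of container 101 to a dump directory...
vzdump 101 --dumpdir /mnt/backup --mode stop

# ...then restore it, placing the container on the target storage
# (--force overwrites the existing CT with the same ID):
pct restore 101 /mnt/backup/vzdump-lxc-101-*.tar --storage local-lvm --force
```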
Hi Marco,
On 10/04/2016 04:59 PM, Marco Gaiarin wrote:
>
> I prefer to install Debian Jessie, and then ''upgrade'' to Proxmox,
> following:
>
> https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie
>
>
> looking at my servers/logs, I've noticed that both 'ntpd' and
>
Mandi! Alwin Antreich
In chel di` si favelave...
> Only one is needed and on PVE 4 the default is timesyncd.
OK, I've removed 'ntp' from the 'apt-get install' command on the wiki page.
Thanks.
--
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
Associazione ``La
Hi Lindsay,
On 10/03/2016 11:59 PM, Lindsay Mathieson wrote:
> Is it straightforward to set up cache tiering under Proxmox these days? Last
> time I checked (several years ago) it was
> quite tricky with the crush rule setup and keeping the integration with the
> proxmox web ui.
Sadly I can't