[PVE-User] Proxmox 4.2 SMART on HP RAID

2016-10-04 Thread Hexis
Does anyone here use an HP RAID controller (HP P410, etc.) with Proxmox and monitor 
the SMART status of the drives using smartctl or another method? I want 
to monitor my server via SNMP for drive health and I am trying to 
determine the best way to do this.
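For reference, smartmontools can usually address physical drives behind a Smart Array
controller like the P410 via its cciss device type; a minimal sketch, assuming the
logical drive shows up as /dev/sda and the physical disks are numbered from 0
(adjust both to your setup):

  smartctl -a -d cciss,0 /dev/sda
  smartctl -a -d cciss,1 /dev/sda

On older kernels that still use the cciss driver, the device may be /dev/cciss/c0d0 instead.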


Thanks,

-Hexis

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Storage migration issue with thin provisioning SAN storage

2016-10-04 Thread Dhaussy Alexandre
Thanks for pointing this out.

I'll try offline migration and see how it behaves.


On 04/10/2016 at 12:59, Alexandre DERUMIER wrote:
> But if you do the migration offline, it should work. (I don't know if you can 
> stop your VMs during the migration)
>
> - Original Mail -
> From: "aderumier" 
> To: "proxmoxve" 
> Sent: Tuesday, 4 October 2016 12:58:32
> Subject: Re: [PVE-User] Storage migration issue with thin provisioning SAN 
> storage
>
> Hi,
> the limitation comes from NFS. (Proxmox correctly uses detect-zeroes, but the NFS 
> protocol has limitations; I think it'll be fixed in NFS 4.2)
>
> I have the same problem when I migrate from NFS to Ceph.
>
> I'm using discard/trimming to reclaim the space after migration.
>
> - Original Mail -
> From: "Dhaussy Alexandre" 
> To: "proxmoxve" 
> Sent: Tuesday, 4 October 2016 10:34:07
> Subject: Re: [PVE-User] Storage migration issue with thin provisioning SAN 
> storage
>
> Hello Brian,
>
> Thanks for the tip, it may be my last-chance solution..
>
> Fortunately I kept all the original disk files on an NFS share, so I'm able
> to roll back and redo the migration... if I manage to make qemu mirroring
> work with sparse VMDKs..
>
>
> On 03/10/2016 at 21:11, Brian :: wrote:
>> Hi Alexandre,
>>
>> If the guests are Linux you could try using the SCSI driver with discard enabled.
>>
>> fstrim -v / may then free the unused space on the underlying FS.
>>
>> I don't use LVM but this certainly works with other types of storage..
>>
>>
>>
>>
>>
>> On Mon, Oct 3, 2016 at 5:14 PM, Dhaussy Alexandre
>>  wrote:
>>> Hello,
>>>
>>> I'm currently migrating more than 1000 VMs from VMware to Proxmox, but I'm 
>>> hitting a major issue with storage migrations..
>>> I'm migrating from VMFS datastores to NFS on VMware, then from NFS 
>>> to LVM on Proxmox.
>>>
>>> The LVs on Proxmox sit on top of thin-provisioned (FC SAN) LUNs.
>>> Thin provisioning works fine on newly created Proxmox VMs.
>>>
>>> But I just discovered that when using qm move_disk to migrate from NFS to 
>>> LVM, it actually allocates all blocks of data!
>>> It's a huge problem for me and clearly a no-go... as the SAN storage arrays 
>>> are filling up very quickly!
>>>
>>> After further investigation in qemu & Proxmox, I found in the Proxmox code 
>>> that qemu_drive_mirror is called with these arguments:
>>>
>>> (In /usr/share/perl5/PVE/QemuServer.pm)
>>>
>>> 5640 sub qemu_drive_mirror {
>>> ...
>>> 5654 my $opts = { timeout => 10, device => "drive-$drive", mode => 
>>> "existing", sync => "full", target => $qemu_target };
>>>
>>> If I'm not wrong, QEMU supports a "detect-zeroes" flag for mirroring block 
>>> targets, but Proxmox does not use it.
>>> Is there any reason why this flag is not enabled during qemu drive 
>>> mirroring??
>>>
>>> Cheers,
>>> Alexandre.
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox 4.2 : Online Migration Successful but VM stops

2016-10-04 Thread Kevin Lemonnier
> 
> pve-qemu-kvm (2.6.1-2) unstable; urgency=medium
>   * virtio related live migration fixes

Ha! That's very interesting, maybe that would fix my problem
where moving VM disks to a new storage shuts down the VM.

-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox 4.2 : Online Migration Successful but VM stops

2016-10-04 Thread Dhaussy Alexandre

> Now, about migration: maybe it is a qemu bug, but I never hit it.
>
> Do you have the same problem without HA enabled?
> Can you reproduce it 100%?
>
Yes, 100% reproducible when memory hotplug is enabled.
Besides, I found an interesting update to qemu-kvm, since I'm using 
version 2.6-1 on all nodes:

pve-qemu-kvm (2.6.1-2) unstable; urgency=medium
   * virtio related live migration fixes
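For reference, a quick way to check which qemu build a node actually runs (a minimal
sketch; the grep pattern is just an example):

  pveversion -v | grep pve-qemu-kvm
  dpkg -l pve-qemu-kvm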
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ntp(d) or systemd-timesyncd?

2016-10-04 Thread Alwin Antreich
Hi Marco,

On 10/04/2016 05:43 PM, Marco Gaiarin wrote:
> Hello! Alwin Antreich
>   On that day, you wrote...
> 
>> Only one is needed and on PVE 4 the default is timesyncd.
> 
> OK, I've removed 'ntp' from the 'apt-get install' command on the wiki page.

As a remark, you need to configure /etc/systemd/timesyncd.conf with a
time source and restart the systemd-timesyncd service.
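A minimal sketch, using the Debian pool servers as an example time source (replace
them with your own NTP servers):

  # /etc/systemd/timesyncd.conf
  [Time]
  NTP=0.debian.pool.ntp.org 1.debian.pool.ntp.org

  systemctl restart systemd-timesyncd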

> 
> 
> Thanks.
> 

---
Cheers,
Alwin
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Non-x86 VMs?

2016-10-04 Thread Marco Gaiarin

Probably a dumb question, but I've not found an answer on the wiki...


Is there a way to 'run' a VM for non Intel/AMD CPUs, and manage it via
the PVE interface?


I've got an old Sun UltraSparc 5 lying around that I need to keep...

-- 
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
  Associazione ``La Nostra Famiglia''  http://www.lanostrafamiglia.it/
  Polo FVG   -   Via della Bontà, 7 - 33078   -   San Vito al Tagliamento (PN)
  marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f +39-0434-842797

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
http://www.lanostrafamiglia.it/25/index.php/component/k2/item/123
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph Cache tiering

2016-10-04 Thread gauthierl
Hi,

We were interested in setting up SSD cache tiering for our RBD storage, but the 
Ceph documentation seems to recommend against it:
http://docs.ceph.com/docs/master/rados/operations/cache-tiering/?highlight=tier#known-bad-workloads

Does anyone have experience with it?

Mathieu

- Original Message -
From: "Alwin Antreich" 
To: pve-user@pve.proxmox.com
Sent: Tuesday, 4 October, 2016 08:51:12
Subject: Re: [PVE-User] Ceph Cache tiering

Hi Lindsay,

On 10/03/2016 11:59 PM, Lindsay Mathieson wrote:
> Is it straightforward to set up cache tiering under Proxmox these days? Last 
> time I checked (several years ago) it was
> quite tricky with the CRUSH rule setup and keeping the integration with the 
> Proxmox web UI.

Sadly I can't answer this one.

> 
> 
> Also, do the Proxmox Ceph tools work with SSD partitions? Again, last time I 
> checked they didn't.

We use SSDs for OSD journals and the pveceph tool works well with them, also 
from the GUI.

> 
> 

-- 
Cheers,
Alwin
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Non-x86 VMs?

2016-10-04 Thread Thomas Lamprecht

Hi,

On 10/04/2016 09:30 AM, Marco Gaiarin wrote:

> Probably a dumb question, but I've not found an answer on the wiki...


no dumb question!



> Is there a way to 'run' a VM for non Intel/AMD CPUs, and manage it via
> the PVE interface?



No, currently not; our KVM/QEMU package gets compiled with the x86_64 
target only.


You would have to recompile our package (pve-qemu-kvm) with the 
respective targets included, and start the VM with KVM disabled.
I'm not 100% sure, though, that there is nothing else to change.
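As a rough illustration only (this is not something our packages support out of the
box), a rebuilt qemu-system-sparc64 binary would be started along these lines, in
pure emulation; the machine type, memory size and disk image name are assumptions:

  qemu-system-sparc64 -M sun4u -m 512 \
    -drive file=ultra5-disk.img,format=raw \
    -nographic

Note that sun4u emulation in QEMU is incomplete, so an Ultra 5 workload may well not boot.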


If you do this experiment it would be nice if you could get back to the 
list with the results :)


If you have questions about how to add the targets to the package I can try 
to assist you; I don't have time for this experiment myself atm, sorry :)



cheers,
Thomas


> I've got an old Sun UltraSparc 5 lying around that I need to keep...




___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Storage migration issue with thin provisioning SAN storage

2016-10-04 Thread Dhaussy Alexandre
Hello Brian,

Thanks for the tip, it may be my last-chance solution..

Fortunately I kept all the original disk files on an NFS share, so I'm able 
to roll back and redo the migration... if I manage to make qemu mirroring 
work with sparse VMDKs..


On 03/10/2016 at 21:11, Brian :: wrote:
> Hi Alexandre,
>
> If the guests are Linux you could try using the SCSI driver with discard enabled.
>
> fstrim -v / may then free the unused space on the underlying FS.
>
> I don't use LVM but this certainly works with other types of storage..
>
>
>
>
>
> On Mon, Oct 3, 2016 at 5:14 PM, Dhaussy Alexandre
>  wrote:
>> Hello,
>>
>> I'm currently migrating more than 1000 VMs from VMware to Proxmox, but I'm 
>> hitting a major issue with storage migrations..
>> I'm migrating from VMFS datastores to NFS on VMware, then from NFS 
>> to LVM on Proxmox.
>>
>> The LVs on Proxmox sit on top of thin-provisioned (FC SAN) LUNs.
>> Thin provisioning works fine on newly created Proxmox VMs.
>>
>> But I just discovered that when using qm move_disk to migrate from NFS to 
>> LVM, it actually allocates all blocks of data!
>> It's a huge problem for me and clearly a no-go... as the SAN storage arrays 
>> are filling up very quickly!
>>
>> After further investigation in qemu & Proxmox, I found in the Proxmox code 
>> that qemu_drive_mirror is called with these arguments:
>>
>> (In /usr/share/perl5/PVE/QemuServer.pm)
>>
>> 5640 sub qemu_drive_mirror {
>> ...
>> 5654 my $opts = { timeout => 10, device => "drive-$drive", mode => 
>> "existing", sync => "full", target => $qemu_target };
>>
>> If I'm not wrong, QEMU supports a "detect-zeroes" flag for mirroring block 
>> targets, but Proxmox does not use it.
>> Is there any reason why this flag is not enabled during qemu drive 
>> mirroring??
>>
>> Cheers,
>> Alexandre.
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Manager Skins/Themes

2016-10-04 Thread John Crisp
On 03/10/16 21:12, Brian :: wrote:
> Jesus - someone got out of bed on the wrong side today!
> 

:-)

> I've just been working on something that has had me stuck in the 4.3
> UI for the past 48 hours on and off.
> Personally I like it, but that's just my opinion - and I did give the
> guys feedback after I upgraded.
> 

We are all different - hence my posing the question of whether a
theme manager might be a good idea, so we can modify the look and feel to
suit ourselves. Imposed ideas are not always a good thing (think TIFKAM!)

An option to test radical GUI changes (and this is one) should be available,
with opportunities for feedback (I didn't see any).

> I'm sure some things can be improved but certainly my opinion isn't
> anywhere near as negative as yours.
> 
> 

I never liked the colours - the green is absolutely awful (and there are
plenty of others who have said the same) but I could live with it.

The double vertical is just insane - I can't think of anywhere else that
uses it. It just grates every time I look at it. There is no separation
from the first column.

We are all used to a vertical column on the left and a horizontal row, in
multiple programs (think, say, email programs, file managers, and just
about anything else).

With wide screens there is no need to try and cram everything to the left.

If double columns were a good GUI choice you would see them everywhere,
but you don't, which means GUI designers the world over consider them a
bad design and do not use them.

Anyway, I guess I am in a minority of one and will be ignored.

B. Rgds
John



___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Storage migration issue with thin provisioning SAN storage

2016-10-04 Thread Alexandre DERUMIER
Hi,
the limitation comes from NFS. (Proxmox correctly uses detect-zeroes, but the NFS 
protocol has limitations; I think it'll be fixed in NFS 4.2)

I have the same problem when I migrate from NFS to Ceph.

I'm using discard/trimming to reclaim the space after migration.
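A minimal sketch of that workflow, assuming a Linux guest with VM ID 100 and a SCSI
disk on a storage called 'mystor' (the ID, storage and volume names are just examples):

  qm set 100 -scsi0 mystor:vm-100-disk-1,discard=on
  # then, inside the guest, after the move:
  fstrim -v /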

- Original Mail -
From: "Dhaussy Alexandre" 
To: "proxmoxve" 
Sent: Tuesday, 4 October 2016 10:34:07
Subject: Re: [PVE-User] Storage migration issue with thin provisioning SAN 
storage

Hello Brian, 

Thanks for the tip, it may be my last-chance solution..

Fortunately I kept all the original disk files on an NFS share, so I'm able 
to roll back and redo the migration... if I manage to make qemu mirroring 
work with sparse VMDKs..


On 03/10/2016 at 21:11, Brian :: wrote:
> Hi Alexandre,
>
> If the guests are Linux you could try using the SCSI driver with discard enabled.
>
> fstrim -v / may then free the unused space on the underlying FS.
>
> I don't use LVM but this certainly works with other types of storage..
> 
> 
> 
> 
> 
> On Mon, Oct 3, 2016 at 5:14 PM, Dhaussy Alexandre 
>  wrote: 
>> Hello, 
>> 
>> I'm currently migrating more than 1000 VMs from VMware to Proxmox, but I'm 
>> hitting a major issue with storage migrations..
>> I'm migrating from VMFS datastores to NFS on VMware, then from NFS 
>> to LVM on Proxmox.
>>
>> The LVs on Proxmox sit on top of thin-provisioned (FC SAN) LUNs.
>> Thin provisioning works fine on newly created Proxmox VMs.
>>
>> But I just discovered that when using qm move_disk to migrate from NFS to 
>> LVM, it actually allocates all blocks of data!
>> It's a huge problem for me and clearly a no-go... as the SAN storage arrays 
>> are filling up very quickly!
>> 
>> After further investigation in qemu & Proxmox, I found in the Proxmox code 
>> that qemu_drive_mirror is called with these arguments:
>> 
>> (In /usr/share/perl5/PVE/QemuServer.pm) 
>> 
>> 5640 sub qemu_drive_mirror { 
>> ... 
>> 5654 my $opts = { timeout => 10, device => "drive-$drive", mode => 
>> "existing", sync => "full", target => $qemu_target }; 
>> 
>> If I'm not wrong, QEMU supports a "detect-zeroes" flag for mirroring block 
>> targets, but Proxmox does not use it.
>> Is there any reason why this flag is not enabled during qemu drive 
>> mirroring??
>> 
>> Cheers, 
>> Alexandre. 

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Non-x86 VMs?

2016-10-04 Thread Marco Gaiarin
Hello! Thomas Lamprecht
  On that day, you wrote...

> >Probably a dumb question, but I've not found an answer on the wiki...
> no dumb question!

;-)


> >Is there a way to 'run' a VM for non Intel/AMD CPUs, and manage it via
> >the PVE interface?
> No, currently not; our KVM/QEMU package gets compiled with the x86_64
> target only.

OK.

> You would have to recompile our package (pve-qemu-kvm) with the
> respective targets included,
> and start the VM with KVM disabled. I'm not 100% sure, though, that
> there is nothing else to change.
> If you do this experiment it would be nice if you could get back to the
> list with the results :)
> If you have questions about how to add the targets to the package I can
> try to assist you; I don't have time for this experiment myself
> atm, sorry :)

Probably I can do some tests, but right now I'm too busy... and I currently
lack a Proxmox test environment. ;)


Anyway, I'm happy to know that it is (probably) doable...

-- 
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
  Associazione ``La Nostra Famiglia''  http://www.lanostrafamiglia.it/
  Polo FVG   -   Via della Bontà, 7 - 33078   -   San Vito al Tagliamento (PN)
  marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f +39-0434-842797

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
http://www.lanostrafamiglia.it/25/index.php/component/k2/item/123
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] LXC Migration to another Storage

2016-10-04 Thread Daniel
Hi there,

is it possible to easily change the LVs (i.e. the storage) used by an LXC container?
Someone created an LXC container on our backup space and I want to migrate it back 
to our storage system.
In KVM this seems easy by just clicking around, but in LXC it seems not 
supported yet :-(
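A possible workaround in the meantime: back the container up with vzdump and restore
it onto the other storage. A minimal sketch, assuming container ID 101, a backup
storage called 'backups' and a target storage called 'mystor' (all names are examples;
check the real archive path under the backup storage):

  vzdump 101 -storage backups -compress lzo
  # restoring over an existing CT 101 requires removing it first (or the --force option)
  pct restore 101 /mnt/pve/backups/dump/vzdump-lxc-101.tar.lzo -storage mystor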

Cheers


Daniel
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ntp(d) or systemd-timesyncd?

2016-10-04 Thread Alwin Antreich
Hi Marco,

On 10/04/2016 04:59 PM, Marco Gaiarin wrote:
> 
> I prefer to install Debian Jessie and then ''upgrade'' to Proxmox,
> following:
> 
>   https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie
> 
> 
> Looking at my servers' logs, I've noticed that both 'ntpd' and
> 'systemd-timesyncd' are active.
> 
> Considering that:
> 
> 1) 'systemd-timesyncd' in Debian Jessie is normally disabled, so it seems
>  that it is Proxmox that enables it
> 
> 2) 'systemd-timesyncd' is a reasonable choice; there's no reason to
>  have a Proxmox node that is also an NTP server...
> 
> 3) probably I installed ntp myself, by copying and pasting from:
> 
>   
> https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie#Install_Proxmox_VE_packages
> 
>  eg:
>   apt-get install proxmox-ve ntp ssh postfix ksm-control-daemon 
> open-iscsi systemd-sysv
> 
> 
> Probably the 'apt-get install' line was taken from a previous, non-systemd
> Debian version, and 'ntp' can be removed from it.
>
> Or are both needed?

Only one is needed and on PVE 4 the default is timesyncd.
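A quick way to check which of the two is actually active and keeping the node in
sync (a minimal sketch):

  timedatectl status
  systemctl status systemd-timesyncd ntp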

> 
> 
> Thanks.
> 

-- 
Cheers,
Alwin
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ntp(d) or systemd-timesyncd?

2016-10-04 Thread Marco Gaiarin
Hello! Alwin Antreich
  On that day, you wrote...

> Only one is needed and on PVE 4 the default is timesyncd.

OK, I've removed 'ntp' from the 'apt-get install' command on the wiki page.


Thanks.

-- 
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
  Associazione ``La Nostra Famiglia''  http://www.lanostrafamiglia.it/
  Polo FVG   -   Via della Bontà, 7 - 33078   -   San Vito al Tagliamento (PN)
  marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f +39-0434-842797

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
http://www.lanostrafamiglia.it/25/index.php/component/k2/item/123
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph Cache tiering

2016-10-04 Thread Alwin Antreich
Hi Lindsay,

On 10/03/2016 11:59 PM, Lindsay Mathieson wrote:
> Is it straightforward to set up cache tiering under Proxmox these days? Last 
> time I checked (several years ago) it was
> quite tricky with the CRUSH rule setup and keeping the integration with the 
> Proxmox web UI.

Sadly I can't answer this one.

> 
> 
> Also, do the Proxmox Ceph tools work with SSD partitions? Again, last time I 
> checked they didn't.

We use SSDs for OSD journals and the pveceph tool works well with them, also 
from the GUI.
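A minimal sketch of creating an OSD with a separate SSD journal, assuming /dev/sdb
as the data disk and /dev/sdc as the journal SSD (hypothetical device names; see
'man pveceph' for the exact option syntax on your version):

  pveceph createosd /dev/sdb -journal_dev /dev/sdc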

> 
> 

-- 
Cheers,
Alwin
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user