Hi,

>>Ceph has rather larger overheads

Agreed, there is overhead, but performance increases with each release.
I think the biggest limitation is that you currently can't reach more than
70-90k IOPS with a single VM disk,
and latency could be improved too.

>>much bigger PITA to admin
I don't agree. I'm running 5 Ceph clusters (around 200 TB of SSD) with
almost zero maintenance.

>>does not perform as well on whitebox hardware
Define whitebox hardware?
The only thing is to avoid consumer SSDs (they perform poorly with direct I/O).


>>Can Ceph run multiple replication and ec levels for different files on the
>>same volume?
You can manage that per pool (as block storage).
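As a rough sketch of what per-pool management looks like with the stock `ceph`
and `rbd` CLIs (the pool names, PG counts, and EC profile below are made up for
illustration, and assume a working cluster):

```shell
# Hypothetical pool names/parameters; each pool carries its own redundancy policy.

# A 3-way replicated pool for latency-sensitive VM disks:
ceph osd pool create rbd_replicated 128 128 replicated
ceph osd pool set rbd_replicated size 3

# An erasure-coded pool (k=4 data + m=2 parity chunks) for bulk storage:
ceph osd erasure-code-profile set ec42 k=4 m=2
ceph osd pool create rbd_ec 128 128 erasure ec42
ceph osd pool set rbd_ec allow_ec_overwrites true   # needed for RBD on EC pools

# RBD keeps image metadata in a replicated pool and data in the EC pool:
rbd create --size 100G --data-pool rbd_ec rbd_replicated/bulk-disk
```

So redundancy is chosen per pool rather than per file, and a VM can mix disks
from differently-configured pools.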
>>Near instantaneous snapshots
Yes.

>>and restores

A little bit slower to roll back.
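For illustration, a sketch with the `rbd` CLI (image and snapshot names are
hypothetical, and this assumes a working cluster):

```shell
# Taking a snapshot is near-instant (copy-on-write):
rbd snap create rbd/vm-disk@before-upgrade
rbd snap ls rbd/vm-disk

# Rollback overwrites the image with the snapshot's data, so its duration
# grows with image size -- this is the slower part:
rbd snap rollback rbd/vm-disk@before-upgrade

# Cloning from a snapshot is typically faster than rolling back in place:
rbd snap protect rbd/vm-disk@before-upgrade
rbd clone rbd/vm-disk@before-upgrade rbd/vm-disk-restored
```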

>>at any level of the filesystem you choose?

Why are you talking about filesystems? Are you mounting LizardFS inside your
VMs, or using it to host VM disks?


>>Run on commodity hardware with trivial adding of disks as required?
Yes.
----- Original Message -----
From: "Lindsay Mathieson" <lindsay.mathie...@gmail.com>
To: "proxmoxve" <pve-user@pve.proxmox.com>
Sent: Friday, March 30, 2018 05:05:11
Subject: Re: [PVE-User] Custom storage in ProxMox 5

Ceph has rather larger overheads, much bigger PITA to admin, does not perform 
as well on whitebox hardware – in fact the Ceph crowd std reply to issues is to 
spend big on enterprise hardware and is far less flexible. 

Can Ceph run multiple replication and ec levels for different files on the same 
volume? Can you change goal settings on the fly? Near instantaneous snapshots 
and restores at any level of the filesystem you choose? Run on commodity 
hardware with trivial adding of disks as required? 



ZFS is not a distributed filesystem, so don’t know why you bring it up. Though 
I am using ZFS as the underlying filesystem. 



-- 
Lindsay Mathieson 



________________________________ 
From: pve-user <pve-user-boun...@pve.proxmox.com> on behalf of 
ad...@extremeshok.com <ad...@extremeshok.com> 
Sent: Friday, March 30, 2018 11:53:10 AM 
To: pve-user@pve.proxmox.com 
Subject: Re: [PVE-User] Custom storage in ProxMox 5 

Don't waste your time with lizardfs. 

Proxmox 5+ has proper Ceph and ZFS support. 

Ceph does everything and more, and ZFS is about the de-facto container 
storage medium. 


On 03/30/2018 03:20 AM, Lindsay Mathieson wrote: 
> I was working on a custom storage plugin (lizardfs) for VE 4.x, looking to 
> revisit it. Has the API changed much (or at all) for PX 5? Is there any 
> documentation for it? 
> 
> Thanks, 
> 
> -- 
> Lindsay Mathieson 
> 
> _______________________________________________ 
> pve-user mailing list 
> pve-user@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

