Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Lindsay Mathieson
On 10/10/2016 8:19 PM, Brian :: wrote: I think with clusters with a VM-type workload, and at the scale that Proxmox users tend to build (< 20 OSD servers), a cache tier adds a layer of complexity that isn't going to pay back. If you want decent IOPS / throughput at this scale with Ceph no

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Eneko Lacunza
Hi, On 10/10/16 at 13:46, Lindsay Mathieson wrote: On 10/10/2016 8:19 PM, Brian :: wrote: I think with clusters with a VM-type workload, and at the scale that Proxmox users tend to build (< 20 OSD servers), a cache tier adds a layer of complexity that isn't going to pay back. If you want

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Lindsay Mathieson
On 10/10/2016 10:22 PM, Eneko Lacunza wrote: But this is nonsense, ZFS-backed Ceph?! You're supposed to give full disks to Ceph, so that performance increases as you add more disks. I've tried it both ways; the performance is much the same. ZFS also increases in performance the more disks you

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Adam Thompson
The default PVE setup puts an XFS filesystem onto each "full disk" assigned to CEPH. CEPH does **not** write directly to raw devices, so the choice of filesystem is largely irrelevant. Granted, ZFS is a "heavier" filesystem than XFS, but it's no better or worse than running CEPH on XFS on
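For anyone who wants to check what each OSD's data directory is actually sitting on, something like the following should show it (a rough sketch that assumes the usual filestore mount points under /var/lib/ceph/osd/ceph-<id>):

  # show the filesystem type backing each OSD data directory
  df -hT /var/lib/ceph/osd/ceph-*

  # or inspect the mounts directly
  mount | grep '/var/lib/ceph/osd'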

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Lindsay Mathieson
On 11/10/2016 2:05 AM, Eneko Lacunza wrote: The choice of filesystem is not "largely irrelevant"; filesystems are quite complex and the choice is relevant. With ZFS, you're in unknown territory AFAIK as it is not regularly tested in ceph development; I think only ext4 and XFS are regularly

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Yannis Milios
>> ...but there is one deal breaker for us and that's snapshots - they are incredibly slow to restore. You can try cloning instead of rolling back an image to a snapshot. It's much faster and is the method recommended by the official Ceph documentation.
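Roughly, the two approaches look like this on the rbd command line (pool and image names are only placeholders, and this is a sketch of the generic Ceph workflow, not something wired into Proxmox):

  # slow path: roll the image itself back to the snapshot
  rbd snap rollback rbd/vm-100-disk-1@before-upgrade

  # faster path: keep the image as-is and work from a clone of the snapshot
  rbd snap protect rbd/vm-100-disk-1@before-upgrade
  rbd clone rbd/vm-100-disk-1@before-upgrade rbd/vm-100-disk-1-restore

The clone is copy-on-write, so it is usable almost immediately, while a rollback has to rewrite the image's data.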

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Lindsay Mathieson
On 11/10/2016 6:28 AM, Yannis Milios wrote: You can try cloning instead of rolling back an image to a snapshot. It's much faster and is the method recommended by the official Ceph documentation. Not integrated with Proxmox. -- Lindsay Mathieson

Re: [PVE-User] ceph - slow rbd ls -l

2016-10-10 Thread Thiago Damas
2016-10-10 20:47 GMT-03:00 Lindsay Mathieson: > On 11/10/2016 7:59 AM, Thiago Damas wrote: >> I'm experiencing some timeouts when creating new disks/VMs, using a ceph storage. >> Is there some way to reduce the long listing of rbd ls, i.e. "rbd ls -l"?

Re: [PVE-User] P2V Windows2003 Server (AD)

2016-10-10 Thread HK 590 Dicky 梁家棋 資科
oh.. so useful, thanks a lot. Best Regards, Dicky Leung, System Administrator, IT Department. Tel: (852) 24428990 Fax: (852) 24757095 Email: 5...@hshcl.com Website: www.hshcl.com DISCLAIMER: http://www.hshcl.com/disclaimer.htm Jean R. Franco on 07/10/2016

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Eneko Lacunza
Hi, The choice of filesystem is not "largely irrelevant"; filesystems are quite complex and the choice is relevant. With ZFS, you're in unknown territory AFAIK as it is not regularly tested in ceph development; I think only ext4 and XFS are regularly tested. And there are known

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Kautz, Valentin
I can only confirm this. Speed went down as the cache tier filled up, even with superfast PCIe SSDs. Additionally, I think it is error prone. I ran into the problem that an SSD got stuck because it was full, causing the complete storage to stall. There is a parameter for the maximum size of the
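Presumably this refers to the sizing knobs on the cache pool itself. A rough sketch (the pool name hot-cache and all figures are placeholders, not recommendations):

  # cap the cache pool so it flushes/evicts before filling the SSDs
  ceph osd pool set hot-cache target_max_bytes 200000000000
  ceph osd pool set hot-cache target_max_objects 1000000

  # start flushing dirty objects at 40% and evicting at 80% of that target
  ceph osd pool set hot-cache cache_target_dirty_ratio 0.4
  ceph osd pool set hot-cache cache_target_full_ratio 0.8

As I understand it, without target_max_bytes or target_max_objects the tiering agent has nothing to size against, which is exactly the situation where the cache can run the SSDs full.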

Re: [PVE-User] ceph - slow rbd ls -l

2016-10-10 Thread Lindsay Mathieson
On 11/10/2016 7:59 AM, Thiago Damas wrote: I'm experiencing some timeouts when creating new disks/VMs, using a ceph storage. Is there some way to reduce the long listing of rbd ls, i.e. "rbd ls -l"? How's your "ceph -s" look? Are the logs showing any particular OSDs as being slow to
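A quick way to confirm it is the long listing itself, and to spot a lagging OSD, might be something like this (the pool name rbd is just an assumption here):

  # cluster health and any slow-request warnings
  ceph -s

  # per-OSD commit/apply latency - one outlier can stall rbd ls -l
  ceph osd perf

  # names only vs. opening every image header
  time rbd ls rbd
  time rbd ls -l rbd

The -l listing has to open each image to read its size and parent information, so one slow OSD or a very large number of images makes it crawl.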

Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Brian ::
Hi Lindsay, I think with clusters with a VM-type workload, and at the scale that Proxmox users tend to build (< 20 OSD servers), a cache tier adds a layer of complexity that isn't going to pay back. If you want decent IOPS / throughput at this scale with Ceph, no spinning rust allowed anywhere :)