That makes sense - the ZIL acts as a writeback cache for you. If the latency
between QEMU and ZFS were higher (e.g. across NFS, maybe across a router, too)
then you'd likely be able to measure a small difference.
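The latency argument can be made concrete with a toy model (illustrative numbers only; the function names and the batch size are mine, assuming a fixed per-write round trip and batched flushes):

```python
# Toy model of why a writeback cache hides storage latency (illustrative,
# not a benchmark). A synchronous writer pays the full round-trip time
# (RTT) per write; a writeback cache acknowledges immediately and pays
# the RTT roughly once per flushed batch.

def total_latency_sync(n_writes: int, rtt_ms: float) -> float:
    """Every write waits for the backing store."""
    return n_writes * rtt_ms

def total_latency_writeback(n_writes: int, rtt_ms: float, batch: int) -> float:
    """Writes are acked locally; only batch flushes pay the RTT."""
    flushes = -(-n_writes // batch)  # ceiling division
    return flushes * rtt_ms

local = 0.1   # ms, e.g. QEMU -> local ZFS with a ZIL
remote = 2.0  # ms, e.g. QEMU -> NFS, maybe across a router

for rtt in (local, remote):
    sync = total_latency_sync(1000, rtt)
    wb = total_latency_writeback(1000, rtt, batch=100)
    print(f"rtt={rtt}ms  sync={sync:.0f}ms  writeback={wb:.0f}ms")
```

At the local RTT the absolute difference is tiny, which is why it stops being measurable.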
If the latencies are all small enough, any single writeback cache will
likely hide the difference.
On 12/10/2016 9:55 PM, Adam Thompson wrote:
Ultimately, "Always Use DirectIO" is a religious belief, not a technically
sound one. There are situations where it makes sense, and other situations where it
doesn't. The same can be said of *every single* setting - if there were only One True
value, it wouldn't be a setting in the first place!
-Adam
> -----Original Message-----
> From: pve-user [mailto:pve-user-boun...@pve.proxmox.com] On
> Behalf Of Lindsay Mathieson
> Sent: October 11, 2016 23:12
> To: PVE User List <pve-user@pve.proxmox.com>
> Subject: Re: [PVE-User] Ceph Cache Tiering
>
> On 12 October 2016 at 13:28, Adam Thompson wrote:
On 12 October 2016 at 13:28, Adam Thompson wrote:
> Not a bloody chance... WriteBack is the only thing that gives both acceptable
> performance characteristics and data guarantees.
Eh? I didn't think writeback gave data guarantees, quite the opposite.
--
Lindsay
of a complete
system failure.)
-Adam
> -----Original Message-----
> From: pve-user [mailto:pve-user-boun...@pve.proxmox.com] On
> Behalf Of Emmanuel Kasper
> Sent: October 11, 2016 05:26
> To: PVE User List <pve-user@pve.proxmox.com>
> Subject: Re: [PVE-User] Ceph Cache Tiering
On 11/10/2016 6:39 PM, Eneko Lacunza wrote:
I linked the bugreport, not that I care really :)
If you read through the bug report it turns out to be:
a) an xattr issue, not filename size
b) a bug in Ceph which assumes that the ZFS max xattr size is far
smaller than it is.
They're not
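The xattr limit that bug report hinges on can be probed empirically. A sketch, assuming Linux and Python's `os.setxattr`; the helper name is mine, and it returns 0 where xattrs aren't supported:

```python
import os
import tempfile

def max_xattr_value_size(path: str, name: str = "user.probe",
                         ceiling: int = 1 << 20) -> int:
    """Binary-search the largest xattr value the filesystem holding
    `path` accepts, up to `ceiling` bytes. Returns 0 if extended
    attributes are unsupported on this platform or filesystem."""
    if not hasattr(os, "setxattr"):  # Linux-only API
        return 0
    lo, hi = 0, ceiling
    while lo < hi:
        mid = (lo + hi + 1) // 2
        try:
            os.setxattr(path, name, b"x" * mid)
            lo = mid          # this size was accepted, try larger
        except OSError:
            hi = mid - 1      # too big (or unsupported), try smaller
    return lo

with tempfile.NamedTemporaryFile() as f:
    print(max_xattr_value_size(f.name))
```

On ext4 the limit is roughly one block; ZFS accepts far larger values, which is what Ceph's assumption got wrong.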
On 11/10/2016 8:26 PM, Emmanuel Kasper wrote:
Out of curiosity, I suppose you're using the default 'NoCache' as the
cache mode of those QCOW2 images ?
Not on ZFS, which doesn't support O_DIRECT; you need to use writethrough
or writeback.
--
Lindsay Mathieson
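For context, a sketch of how the QEMU cache modes differ (semantics as documented by QEMU; the `usable_on_zfs` helper is my own shorthand for "does not require O_DIRECT"):

```python
# QEMU disk cache modes, per QEMU's documentation.
# 'none' and 'directsync' open the image with O_DIRECT, which ZFS
# (as of this thread, 2016) does not support -- hence the advice to
# use writethrough or writeback on ZFS-backed storage.

# mode -> (uses host page cache, opens image O_DIRECT, guest flushes honored)
CACHE_MODES = {
    "writeback":    (True,  False, True),
    "writethrough": (True,  False, True),
    "none":         (False, True,  True),
    "directsync":   (False, True,  True),
    "unsafe":       (True,  False, False),
}

def usable_on_zfs(mode: str) -> bool:
    """A mode works on a filesystem without O_DIRECT iff it doesn't need it."""
    return not CACHE_MODES[mode][1]

print([m for m in CACHE_MODES if usable_on_zfs(m)])
```

Note that "unsafe" also avoids O_DIRECT, but it drops guest flush requests, which is the data-guarantee problem discussed above.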
On 10/10/2016 04:29 PM, Adam Thompson wrote:
> The default PVE setup puts an XFS filesystem onto each "full disk" assigned
> to CEPH. CEPH does **not** write directly to raw devices, so the choice of
> filesystem is largely irrelevant.
> Granted, ZFS is a "heavier" filesystem than XFS, but it's
On 10/10/16 at 23:24, Lindsay Mathieson wrote:
On 11/10/2016 2:05 AM, Eneko Lacunza wrote:
And there are known limits/problems with ext4 for example, and it seems
they also apply to zfsonlinux
No they do not.
I linked the bugreport, not that I care really :)
Anyway it seems they're
On 11/10/2016 2:05 AM, Eneko Lacunza wrote:
The choice of filesystem is not "largely irrelevant"; filesystems are
quite complex and the choice is relevant. With ZFS, you're in unknown
territory AFAIK as it is not regularly tested in ceph development; I
think only ext4 and XFS are regularly tested.
On 11/10/2016 6:28 AM, Yannis Milios wrote:
You can try to clone instead of rolling back an image to the snapshot. It's
much faster and is the method recommended by the official Ceph documentation.
Not integrated with Proxmox, though.
--
Lindsay Mathieson
> ...but there is one deal breaker for us and that's snapshots - they are
> incredibly slow to restore.
You can try to clone instead of rolling back an image to the snapshot. It's
much faster and is the method recommended by the official Ceph documentation.
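The workflow the Ceph docs recommend looks roughly like this. The pool and image names are hypothetical; the commands are only assembled here for clarity, to be run with the real `rbd` CLI (or python-rbd) against a cluster:

```python
# Clone-instead-of-rollback: protect the snapshot, then create a
# copy-on-write clone (near-instant) rather than `rbd snap rollback`,
# which rewrites the whole image.

def clone_workflow(pool: str, image: str, snap: str, child: str) -> list[str]:
    src = f"{pool}/{image}@{snap}"
    return [
        f"rbd snap create {src}",           # point-in-time snapshot
        f"rbd snap protect {src}",          # clones require a protected parent
        f"rbd clone {src} {pool}/{child}",  # copy-on-write clone
    ]

for cmd in clone_workflow("rbd", "vm-100-disk-1", "before-upgrade",
                          "vm-100-disk-1-restored"):
    print(cmd)
```

The clone stays dependent on its parent snapshot until flattened (`rbd flatten`), which is the trade-off against a full rollback.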
NFS fileserver, and run QCOW2 files
over NFS. Performs better than CEPH did on 1Gbps infrastructure.
-Adam
-----Original Message-----
From: pve-user [mailto:pve-user-boun...@pve.proxmox.com] On
Behalf Of Lindsay Mathieson
Sent: October 10, 2016 09:21
To: pve-user@pve.proxmox.com
Subject: Re: [PVE-User] Ceph Cache Tiering
On 10/10/2016 10:22 PM, Eneko Lacunza wrote:
But this is nonsense, ZFS backed Ceph?! You're supposed to give full
disks to ceph, so that performance increases as you add more disks
I've tried it both ways, the performance is much the same. ZFS also
increases in performance the more disks you add.
Hi,
On 10/10/16 at 13:46, Lindsay Mathieson wrote:
On 10/10/2016 8:19 PM, Brian :: wrote:
I think with clusters with VM-type workloads, at the scale that
proxmox users tend to build (< 20 OSD servers), a cache tier is adding a
layer of complexity that isn't going to pay off. If you want decent
IOPS / throughput at this scale with Ceph, no spinning rust allowed
anywhere :)
On 10/10/2016 8:19 PM, Brian :: wrote:
I think with clusters with VM-type workloads, at the scale that
proxmox users tend to build (< 20 OSD servers), a cache tier is adding a
layer of complexity that isn't going to pay off. If you want decent
IOPS / throughput at this scale with Ceph, no spinning rust allowed
anywhere :)
Hi Lindsay
I think with clusters with VM-type workloads, at the scale that
proxmox users tend to build (< 20 OSD servers), a cache tier is adding a
layer of complexity that isn't going to pay off. If you want decent
IOPS / throughput at this scale with Ceph, no spinning rust allowed
anywhere :)
was smaller than
the maximum cache size.
Kind Regards
Valentin
From: pve-user [pve-user-boun...@pve.proxmox.com] on behalf of
Lindsay Mathieson [lindsay.mathie...@gmail.com]
Sent: Sunday, 9 October 2016 00:21
To: PVE User List
Subject: Re: [PVE-User] Ceph
On 9/10/2016 7:45 AM, Lindsay Mathieson wrote:
cache tiering was limited and a poor fit for VM Hosting, generally the
performance was with it
"was *worse* with it"
:)
--
Lindsay Mathieson
___
pve-user mailing list
pve-user@pve.proxmox.com
----- Original Message -----
From: "Alwin Antreich" <sysadmin-...@cognitec.com>
To: pve-user@pve.proxmox.com
Sent: Tuesday, 4 October, 2016 08:51:12
Subject: Re: [PVE-User] Ceph Cache tiering
Hi Lindsay,
On 10/03/2016 11:59 PM, Lindsay Mathieson wrote:
> Is it straightforward to set up
Hi Lindsay,
On 10/03/2016 11:59 PM, Lindsay Mathieson wrote:
> Is it straightforward to set up cache tiering under Proxmox these days? Last
> time I checked (several years ago) it was
> quite tricky with the crush rule setup and keeping the integration with the
> proxmox web ui.
Sadly I can't
A lot of users on the ceph mailing list have reported problems with Samsung EVO
drives, mainly because they are pretty slow at O_DSYNC writes.
See this post for benchmarking them:
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
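The dd test from that post can be approximated in Python. This writes to a temp file rather than a raw device, so treat the numbers as illustrative only; a Ceph journal issues O_DSYNC writes like these on every commit, which is where slow drives fall over:

```python
import os
import tempfile
import time

def dsync_write_latency(n: int = 100, size: int = 4096) -> float:
    """Mean seconds per synchronous write of `size` bytes.
    Uses O_DSYNC where available, falling back to O_SYNC."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    flags = os.O_WRONLY | getattr(os, "O_DSYNC", os.O_SYNC)
    fd = os.open(path, flags)
    buf = b"\0" * size
    t0 = time.perf_counter()
    for _ in range(n):
        os.write(fd, buf)       # each write must reach stable storage
    elapsed = time.perf_counter() - t0
    os.close(fd)
    os.remove(path)
    return elapsed / n

print(f"{dsync_write_latency():.6f} s per 4k sync write")
```

A journal-worthy SSD stays well under a millisecond per write here; consumer drives without power-loss protection can be orders of magnitude slower.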
----- Original Message -----