On 11-1-2017 08:06, Adrian Saul wrote:
>
> I would concur having spent a lot of time on ZFS on Solaris.
>
> ZIL will reduce the fragmentation problem a lot (because it is not
> doing intent logging into the filesystem itself, which fragments the
> block allocations) and write response will be a lot better. I would
> use different devices for
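For anyone who wants to try what Adrian describes: a dedicated log device (SLOG) moves the ZIL off the pool's main vdevs. A minimal sketch, assuming a pool named `tank` and an NVMe device at `/dev/nvme0n1` — both names are placeholders, not taken from this thread:

```
# Attach a dedicated SLOG so the ZIL no longer logs into the pool's
# main vdevs (this is what avoids the fragmentation described above).
zpool add tank log /dev/nvme0n1

# Verify: the device should now appear under a separate "logs" section.
zpool status tank

# Only synchronous writes go through the ZIL; sync=standard is the default.
zfs get sync tank
```

Note these are cluster/host administration commands, shown as a configuration sketch rather than something to paste blindly into a production node.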
Hello Kevin,
On Tue, Jan 10, 2017 at 4:21 PM, Kevin Olbrich wrote:
> 5x Ceph node equipped with 32GB RAM, Intel i5, Intel DC P3700 NVMe journal,
Is the "journal" used as a ZIL?
> We experienced a lot of io blocks (X requests blocked > 32 sec) when a lot
> of data is changed in
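On the blocked-request symptom: before blaming the backing filesystem it is usually worth finding out which OSDs the slow requests are actually stuck on. A sketch using standard Ceph admin commands (the OSD number `0` is a placeholder):

```
# Overall health, including the "N requests are blocked > 32 sec" warning
ceph health detail

# Cluster status summary, shows which OSDs are implicated
ceph -s

# Inspect in-flight ops on a suspect OSD (run on that OSD's host,
# via its admin socket)
ceph daemon osd.0 dump_ops_in_flight
```

These are read-only diagnostics, so they are safe to run on a live cluster.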
On 11/01/2017 7:21 AM, Kevin Olbrich wrote:
> Read-Cache using normal Samsung PRO SSDs works very well
How did you implement the cache and measure the results?
A ZFS SSD cache will perform very badly with VM hosting and/or
distributed filesystems; the random nature of the I/O and the ARC
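On the question above about measuring results: one simple, reproducible way to compare write response with and without a separate log device is a small fsync micro-benchmark, run once on each configuration. The sketch below is my own illustration (the file name and function are mine, not from this thread); it times write+fsync pairs, which is exactly the synchronous path a ZIL/SLOG absorbs:

```python
import os
import tempfile
import time

def sync_write_latency(path, block_size=4096, iterations=100):
    """Time write+fsync round trips: each pair must reach stable
    storage, which is the path a dedicated ZIL/SLOG accelerates."""
    buf = b"\0" * block_size
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    latencies = []
    try:
        for _ in range(iterations):
            start = time.perf_counter()
            os.write(fd, buf)
            os.fsync(fd)  # force the data to stable storage
            latencies.append(time.perf_counter() - start)
    finally:
        os.close(fd)
        os.unlink(path)
    latencies.sort()
    return {
        "avg_ms": sum(latencies) / len(latencies) * 1000,
        "p99_ms": latencies[int(len(latencies) * 0.99) - 1] * 1000,
    }

if __name__ == "__main__":
    target = os.path.join(tempfile.gettempdir(), "zil_bench.dat")
    stats = sync_write_latency(target)
    print(f"avg {stats['avg_ms']:.3f} ms, p99 {stats['p99_ms']:.3f} ms")
```

Run it on the dataset under test; the average and p99 numbers give a directly comparable before/after picture for a SLOG change.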
Dear Ceph-users,
Just to make sure nobody makes the same mistake, I would like to share my
experience with Ceph on ZFS in our test lab.
ZFS is a Copy-on-Write filesystem and is suitable IMHO where data
resilience has high priority.
I work for a mid-sized datacenter in Germany and we set up a cluster