Re: [ceph-users] Spec for Ceph Mon+Mgr?

2019-01-31 Thread Jesper Krogh


> : We're currently co-locating our mons with the head node of our Hadoop
> : installation. That may be giving us some problems; we don't know yet,
> : so I'm speculating about moving them to dedicated hardware.

Would it be OK to run them on KVM VMs (not backed by Ceph, of course)?

Jesper


Re: [ceph-users] Low traffic Ceph cluster with consumer SSD.

2018-11-25 Thread Jesper Krogh
On 25 Nov 2018, at 15.17, Vitaliy Filippov wrote:
> 
> All disks (HDDs and SSDs) have a cache and may lose non-transactional writes
> that are in flight. However, any adequate disk handles fsyncs (i.e. SATA
> FLUSH CACHE commands), so transactional writes should never be lost, and in
> Ceph ALL writes are transactional: Ceph issues fsyncs all the time. Another
> example is DBMSes: they also issue an fsync when you COMMIT.
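
For illustration, a minimal C sketch of the pattern described above: the data
is only expected to survive power loss once fsync() has returned, since that
is what triggers the cache flush. (The path is made up and error handling is
abbreviated.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical data file standing in for a journal/DB. */
        int fd = open("/var/tmp/commit-demo", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        const char rec[] = "transactional record\n";

        /* After write() the data may still sit in the kernel and in the
         * drive's volatile cache; a power cut here can lose it. */
        if (write(fd, rec, strlen(rec)) < 0) { perror("write"); return 1; }

        /* fsync() flushes the kernel caches and has the kernel issue a
         * cache-flush command to the drive; a DBMS does the same on COMMIT.
         * Only when it returns 0 is the record considered durable. */
        if (fsync(fd) != 0) { perror("fsync"); return 1; }

        close(fd);
        return 0;
    }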

https://www.usenix.org/system/files/conference/fast13/fast13-final80.pdf

This may have changed since 2013, but the usual understanding is that the
cache needs to be disabled to ensure that flushes are persistent, and that
disabling the cache on an SSD is either not honoured by the firmware or makes
write performance plummet.

Which is why enterprise disks have power-loss protection in the form of
capacitors.
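
A crude way to check what a given drive actually does is to time a loop of
small write-plus-fsync pairs; a sketch in C, with the target path and
iteration count made up. A drive that can acknowledge flushes from
capacitor-backed cache typically reports well under a millisecond per fsync;
one that genuinely flushes to NAND often reports several milliseconds.

    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define ITERS 100

    int main(void)
    {
        /* Put this file on the drive being tested (path hypothetical). */
        int fd = open("/mnt/testdrive/fsync-probe",
                      O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        char buf[4096] = {0};
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        /* Each iteration is one small "transaction": write, then flush. */
        for (int i = 0; i < ITERS; i++) {
            if (write(fd, buf, sizeof buf) < 0) { perror("write"); return 1; }
            if (fsync(fd) != 0) { perror("fsync"); return 1; }
        }

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("%.2f ms per write+fsync\n", ms / ITERS);

        close(fd);
        return 0;
    }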

Again, any links/info saying otherwise are very welcome.

Jesper


Re: [ceph-users] Low traffic Ceph cluster with consumer SSD.

2018-11-24 Thread Jesper Krogh


> On 24 Nov 2018, at 18.09, Anton Aleksandrov wrote:
> We plan to have data on a dedicated disk in each node, and my question is
> about WAL/DB for BlueStore. How bad would it be to place it on a consumer
> system SSD? How big is the risk that everything will get "slower than using
> a spinning HDD for the same purpose"? And how big is the risk that our nodes
> will die because of SSD lifespan?

The real risk is the lack of power-loss protection: data can be corrupted on
unclean shutdowns.

Disabling the drive's volatile write cache may help.
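
On the lifespan part of the question, a back-of-envelope sketch (all numbers
made up; substitute the drive's actual TBW rating and your measured write
rate):

    rated endurance:        150 TBW (typical consumer-class rating)
    sustained WAL/DB load:  5 MB/s = 5 * 86400 s/day ≈ 0.43 TB/day
    time to rated wear-out: 150 TBW / 0.43 TB/day ≈ 350 days

Write amplification on small WAL writes can shorten this considerably, so
treat it as an upper bound.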


Re: [ceph-users] cephfs kernel client - page cache being invaildated.

2018-10-14 Thread Jesper Krogh
On 14 Oct 2018, at 15.26, John Hearns wrote:
> 
> This is a general question for the ceph list.
> Should Jesper be looking at these vm tunables?
> vm.dirty_ratio
> vm.dirty_centisecs
> 
> What effect do they have when using Cephfs?

This situation is read-only, so there is no dirty data in the page cache; the
above should be irrelevant.
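
A quick way to confirm that on the client is to watch the kernel's dirty-page
counter; a Linux-specific C sketch:

    #include <stdio.h>
    #include <string.h>

    /* Print the "Dirty:" line from /proc/meminfo: the amount of page
     * cache waiting for writeback. For a pure-read CephFS workload it
     * should stay near zero, so the vm.dirty_* knobs have no effect. */
    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f) { perror("fopen"); return 1; }

        char line[256];
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "Dirty:", 6) == 0) {
                fputs(line, stdout);
                break;
            }
        }
        fclose(f);
        return 0;
    }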

Jesper

