[ceph-users] Ceph balancer history and clarity

2018-11-18 Thread Marc Roos
- If my cluster is not well balanced, do I have to run the balancer's execute step several times, because it only optimises in small steps? - Is there some history of applied plans, to see how optimizing brings down this reported final score of 0.054781? - How can I get the current score? - I have some
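For reference, the score can be queried directly with the balancer module (a minimal sketch; the plan name is a placeholder, commands as in Luminous/Mimic):

  ceph balancer status            # current mode and any queued plans
  ceph balancer eval              # score of the current PG distribution (lower is better)
  ceph balancer eval myplan       # score a saved plan before executing it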

Re: [ceph-users] Use SSDs for metadata or for a pool cache?

2018-11-18 Thread Marc Roos
- Everyone here will tell you not to use a 2x replica; maybe use some erasure code if you want to save space. - I cannot say anything about applying the cache pool; I did not use it, and I read some things that made me doubt it was useful for us. We decided to put some VMs on an SSD rbd pool. Maybe when
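For what it's worth, a minimal sketch of putting an rbd pool on SSDs via a device-class CRUSH rule (the pool and rule names here are invented; assumes Luminous+ device classes):

  ceph osd crush rule create-replicated rbd-ssd default host ssd
  ceph osd pool create rbd-ssd-pool 64 64 replicated rbd-ssd
  ceph osd pool application enable rbd-ssd-pool rbd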

Re: [ceph-users] Huge latency spikes

2018-11-18 Thread Serkan Çoban
I am not talking about the controller cache; you should check the SSDs' own disk caches. On Sun, Nov 18, 2018 at 11:40 AM Alex Litvak wrote: > All 3 nodes have this status for the SSD mirror. Controller cache is on for all 3. Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU

Re: [ceph-users] Huge latency spikes

2018-11-18 Thread Alex Litvak
Hmm, on all nodes:
hdparm -W /dev/sdb
/dev/sdb: SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0d 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 write-caching = not supported
On 11/18/2018 10:30 AM, Ashley Merrick wrote: hdparm -W /dev/xxx should show
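For drives the kernel can talk to directly (the SG_IO error above suggests these sit behind a RAID controller, where hdparm often cannot reach them), checking and disabling the volatile write cache would look something like:

  hdparm -W  /dev/sdb           # show the current write-cache setting
  hdparm -W0 /dev/sdb           # disable the drive's volatile write cache
  smartctl -g wcache /dev/sdb   # alternative query, if smartmontools is recent enough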

[ceph-users] openstack swift multitenancy problems with ceph RGW

2018-11-18 Thread Dilip Renkila
Hi all, We are provisioning the OpenStack Swift API through Ceph RGW (mimic). We have problems when trying to create two containers with the same name in two different projects. After searching the web, I found that I have to enable rgw_keystone_implicit_tenants in the ceph conf file. But it made no difference. Is really
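For reference, the option is typically set in the RGW client section of ceph.conf and needs a radosgw restart; the instance name below is only an example:

  [client.rgw.gateway1]
  rgw keystone implicit tenants = true

followed by a gateway restart, e.g. systemctl restart ceph-radosgw@rgw.gateway1 (the unit instance name depends on your deployment).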

Re: [ceph-users] Huge latency spikes

2018-11-18 Thread Ashley Merrick
hdparm -W /dev/xxx should show you On Mon, 19 Nov 2018 at 12:28 AM, Alex Litvak wrote: > All machines state the same. > > /opt/MegaRAID/MegaCli/MegaCli64 -LDGetProp -DskCache -Lall -a0 > > Adapter 0-VD 0(target id: 0): Disk Write Cache : Disk's Default > Adapter 0-VD 1(target id: 1): Disk Write

Re: [ceph-users] Huge latency spikes

2018-11-18 Thread Alex Litvak
All machines state the same:
/opt/MegaRAID/MegaCli/MegaCli64 -LDGetProp -DskCache -Lall -a0
Adapter 0-VD 0(target id: 0): Disk Write Cache : Disk's Default
Adapter 0-VD 1(target id: 1): Disk Write Cache : Disk's Default
I assume they are all on, which, going by common sense, is actually bad.
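If the per-drive cache does turn out to be the problem, MegaCli can force it off for the logical drives instead of leaving "Disk's Default" (a sketch, using the same adapter numbering as above):

  /opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp -DisDskCache -LAll -a0   # disable drive write cache on all LDs
  /opt/MegaRAID/MegaCli/MegaCli64 -LDGetProp -DskCache  -LAll -a0     # verify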

Re: [ceph-users] Huge latency spikes

2018-11-18 Thread Ashley Merrick
Ah yes, sorry, that will be because you're behind a RAID card. You need to check the RAID config. I know on an HP card, for example, you have an option called "enable disk cache". This is separate from enabling the RAID card's own cache; the setting should be per drive (it is on HP), so it's worth checking the config outputs for
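On HP Smart Array controllers that per-drive toggle is exposed through ssacli/hpssacli; roughly like the following (the slot number is an example and the exact syntax may differ between tool versions):

  ssacli ctrl slot=0 show config detail | grep -i 'drive write cache'
  ssacli ctrl slot=0 modify drivewritecache=disable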

[ceph-users] get cephfs mounting clients' information

2018-11-18 Thread Zhenshi Zhou
Hi, I have a cluster providing CephFS and it works well. But as time goes by, more and more clients use it. I want to write a script to collect the clients' information so that I can keep everything in good order. I googled a lot but couldn't find any way to get the client information. Is

Re: [ceph-users] get cephfs mounting clients' information

2018-11-18 Thread Yan, Zheng
'ceph daemon mds.xx session ls' On Mon, Nov 19, 2018 at 2:40 PM Zhenshi Zhou wrote: > > Hi, > > I have a cluster providing cephfs and it looks well. But as times > goes by, more and more clients use it. I wanna write a script > for getting the clients' informations so that I can keep everything >
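A quick sketch of turning that into a per-client report, run on the MDS host (the daemon name and the jq field paths are assumptions and may vary slightly between releases):

  ceph daemon mds.$(hostname -s) session ls | \
    jq -r '.[] | [.inst, .client_metadata.hostname, .client_metadata.kernel_version, .client_metadata.root] | @tsv'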

Re: [ceph-users] Fwd: what are the potential risks of mixed cluster and client ms_type

2018-11-18 Thread Piotr Dałek
On 2018-11-19 5:05 a.m., Honggang(Joseph) Yang wrote: hello, Our cluster-side ms_type is async, while the client-side ms_type is simple. I want to know whether this is a proper way to use it, and what the potential risks are. None, if Ceph doesn't complain about the async messenger being experimental - both
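For reference, the effective messenger type on each side can be checked through the usual config interfaces (the daemon name is an example):

  ceph daemon osd.0 config get ms_type      # on the OSD host, via the admin socket
  ceph --show-config | grep '^ms_type'      # default seen by a client using this ceph.conf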

Re: [ceph-users] get cephfs mounting clients' information

2018-11-18 Thread Zhenshi Zhou
Many thanks Yan! This command gives me the IP, hostname, mount point and kernel version. All of this is exactly what I need. Besides, is there a way I can get a sub-directory's usage, rather than the whole CephFS usage, from the server? For instance, I have /docker, /kvm, /backup, etc. I

Re: [ceph-users] get cephfs mounting clients' information

2018-11-18 Thread Yan, Zheng
On Mon, Nov 19, 2018 at 3:06 PM Zhenshi Zhou wrote: > > Many thanks Yan! > > This command can get IP, hostname, mounting point and kernel version. All > of these data are exactly what I need. > Besides, is there a way I can get the sub directory's usage other than the > whole > cephfs usage from
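One common way to get per-directory usage on CephFS is the recursive-statistics extended attributes, queried from any client mount (the mount point /mnt/cephfs here is an assumption):

  getfattr -n ceph.dir.rbytes   /mnt/cephfs/docker   # recursive size in bytes
  getfattr -n ceph.dir.rentries /mnt/cephfs/docker   # recursive file + directory count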

Re: [ceph-users] Fwd: what are the potential risks of mixed cluster and client ms_type

2018-11-18 Thread Piotr Dałek
On 2018-11-19 8:17 a.m., Honggang(Joseph) Yang wrote: thank you. but I encountered a problem: https://tracker.ceph.com/issues/37300 I don't know if this is because of the mixed use of messenger types. Have you done basic troubleshooting, like checking osd.179's networking? Usually this means firewall
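Basic connectivity checks for a single OSD could look like this (osd.179 is taken from the message above; the port-range form needs OpenBSD netcat):

  ceph osd find 179                    # host and IP of osd.179
  ceph osd dump | grep '^osd.179 '     # its public/cluster addresses and up/down state
  nc -zv <osd-host> 6800-7300          # probe the default OSD port range from another node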

Re: [ceph-users] read performance, separate client CRUSH maps or limit osd read access from each client

2018-11-18 Thread Konstantin Shalygin
On 11/17/18 1:07 AM, Vlad Kopylov wrote: This is what Jean suggested. I understand it, and it works with the primary. *But what I need is for all clients to access the same files, not separate sets (like red, blue, green)* You should look at other solutions, like GlusterFS. Ceph is overhead for this