Re: [ceph-users] client with uid

2018-02-06 Thread Patrick Donnelly
On Mon, Feb 5, 2018 at 9:08 AM, Keane Wolter wrote: Hi Patrick, Thanks for the info. Looking at the fuse options in the man page, I should be able to pass "-o uid=$(id -u)" at the end of the ceph-fuse command. However, when I do, it returns with an unknown option for
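For reference, a minimal sketch of the invocation being discussed; the monitor address and mount point are assumptions, and per the report the trailing fuse option is rejected:

    # mount attempt as described in the thread (monitor address and mount point are hypothetical)
    ceph-fuse -m 192.0.2.10:6789 /mnt/cephfs -o uid=$(id -u)
    # reportedly fails with "unknown option" for the uid= fuse option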

[ceph-users] OSD Segfaults after Bluestore conversion

2018-02-06 Thread Kyle Hutson
We had a 26-node production ceph cluster which we upgraded to Luminous a little over a month ago. I added a 27th node with Bluestore and didn't have any issues, so I began converting the others, one at a time. The first two went off pretty smoothly, but the 3rd is doing something strange.
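The message does not show the exact conversion steps, but a common Luminous-era Filestore-to-Bluestore sequence looks roughly like this; the OSD id and device path are assumptions, not taken from the thread:

    # drain the Filestore OSD, then recreate it as Bluestore under the same id
    # (osd.42 and /dev/sdb are hypothetical)
    ceph osd out 42
    # wait for backfill to move the data off osd.42
    systemctl stop ceph-osd@42
    ceph osd destroy 42 --yes-i-really-mean-it
    ceph-volume lvm zap /dev/sdb --destroy
    ceph-volume lvm create --bluestore --data /dev/sdb --osd-id 42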

[ceph-users] object lifecycle scope

2018-02-06 Thread Robert Stanford
Hello Ceph users. Is object lifecycle (currently expiration) for rgw implementable on a per-object basis, or is the smallest scope the bucket?
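As a point of reference for the scope question: in the S3 API that rgw implements, lifecycle rules are attached to a bucket, with prefix filters inside the rules narrowing which objects they hit. A hedged illustration, assuming the standard S3 XML format and a hypothetical prefix:

    <LifecycleConfiguration>
      <Rule>
        <ID>expire-old-logs</ID>
        <Prefix>logs/</Prefix>          <!-- narrows the rule below bucket scope -->
        <Status>Enabled</Status>
        <Expiration>
          <Days>30</Days>
        </Expiration>
      </Rule>
    </LifecycleConfiguration>

Applied with e.g. "s3cmd setlifecycle lifecycle.xml s3://mybucket"; whether a given radosgw release supports anything finer than prefix filtering is worth checking against its release notes.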

Re: [ceph-users] RBD device as SBD device for pacemaker cluster

2018-02-06 Thread Lars Marowsky-Bree
On 2018-02-06T13:00:59, Kai Wagner wrote: I had the idea to use an RBD device as the SBD device for a pacemaker cluster, so I don't have to fiddle with multipathing and all that stuff. Has someone already tested this somewhere and can tell how the cluster reacts to

Re: [ceph-users] High apply latency

2018-02-06 Thread Frédéric Nass
Hi Jakub, On 06/02/2018 at 16:03, Jakub Jaszewski wrote: Hi Frederic, I haven't enabled debug-level logging on all OSDs, just on one for the test; I need to double-check that. But it looks like merging is ongoing on a few OSDs, or some OSDs are faulty; I will dig into that tomorrow. Write bandwidth is

Re: [ceph-users] High apply latency

2018-02-06 Thread Jakub Jaszewski
Hi Frederic, I haven't enabled debug-level logging on all OSDs, just on one for the test; I need to double-check that. But it looks like merging is ongoing on a few OSDs, or some OSDs are faulty; I will dig into that tomorrow. Write bandwidth is very random: # rados bench -p default.rgw.buckets.data 120 write
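A minimal sketch of the kind of single-OSD test described here, assuming osd.0 is the one under observation and that filestore logging is the subsystem of interest:

    # raise filestore logging on one OSD to watch for directory split/merge activity
    ceph tell osd.0 injectargs '--debug_filestore 10'
    # run the same benchmark quoted above against the RGW data pool
    rados bench -p default.rgw.buckets.data 120 write
    # drop the log level back afterwards (1/3 is the usual default)
    ceph tell osd.0 injectargs '--debug_filestore 1/3'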

Re: [ceph-users] RBD device as SBD device for pacemaker cluster

2018-02-06 Thread Wido den Hollander
On 02/06/2018 01:00 PM, Kai Wagner wrote: Hi all, I had the idea to use an RBD device as the SBD device for a pacemaker cluster, so I don't have to fiddle with multipathing and all that stuff. Has someone already tested this somewhere and can tell how the cluster reacts to this? I think this

[ceph-users] RBD device as SBD device for pacemaker cluster

2018-02-06 Thread Kai Wagner
Hi all, I had the idea to use an RBD device as the SBD device for a pacemaker cluster, so I don't have to fiddle with multipathing and all that stuff. Has someone already tested this somewhere and can tell how the cluster reacts to this? I think this shouldn't be a problem, but I'm just wondering
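For illustration only, a rough sketch of what such a setup would involve; the pool and image names are hypothetical, and the thread does not confirm whether this is advisable:

    # create and map a small RBD image to serve as the shared SBD disk (names are hypothetical)
    rbd create sbd/fence-disk --size 16
    rbd map sbd/fence-disk              # maps to e.g. /dev/rbd0
    # initialize the SBD metadata once, then check the slot table from each node
    sbd -d /dev/rbd0 create
    sbd -d /dev/rbd0 list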

Re: [ceph-users] Changing osd crush chooseleaf type at runtime

2018-02-06 Thread Flemming Frandsen
Ah! Right, I guess my actual question was: how do the "osd crush chooseleaf type" values 0 and 1 alter the crushmap? By experimentation I've figured out that "osd crush chooseleaf type = 0" turns into "step choose firstn 0 type osd" and "osd crush chooseleaf type = 1" turns into "step chooseleaf
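A sketch of how the resulting default rule differs in the two cases, based on the mapping described above; the surrounding rule boilerplate is the stock replicated rule and is assumed here:

    # osd crush chooseleaf type = 0  -> replicas spread over OSDs only
    rule replicated_ruleset {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            step choose firstn 0 type osd
            step emit
    }

    # osd crush chooseleaf type = 1  -> replicas spread over separate hosts (the usual default)
    rule replicated_ruleset {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type host
            step emit
    }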

[ceph-users] resolved - unusual growth in cluster after replacing journalSSDs

2018-02-06 Thread Jogi Hofmüller
Dear all, we finally found the reason for the unexpected growth in our cluster. The data was created by a collectd plugin [1] that measures latency by running rados bench once a minute. Since our cluster was stressed out for a while, removing the objects created by rados bench failed. We
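For anyone hitting the same thing, leftover rados bench objects can usually be removed as sketched below; the pool name is an assumption, and on a heavily loaded cluster the manual fallback may be needed:

    # ask rados to clean up objects left behind by earlier "rados bench" runs (pool name is hypothetical)
    rados -p bench-pool cleanup
    # manual fallback: benchmark objects carry a recognisable prefix
    rados -p bench-pool ls | grep '^benchmark_data' | xargs -r -n 1 rados -p bench-pool rm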

[ceph-users] how to delete a cluster network

2018-02-06 Thread Александр Пивушков
Hello! My cluster uses two networks. In ceph.conf there are two entries: public_network = 10.53.8.0/24 and cluster_network = 10.0.0.0/24. Servers and clients are connected to one switch. To store data in ceph from clients, we use cephfs: 10.53.8.141:6789,10.53.8.143:6789,10.53.8.144:6789:/ on /
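A hedged sketch of the usual way to drop a separate cluster network, assuming all OSD hosts can already reach each other over the public network:

    # ceph.conf: keep only the public network; OSDs fall back to it for replication traffic
    [global]
    public_network = 10.53.8.0/24
    # cluster_network = 10.0.0.0/24   <- removed

    # then restart the OSDs so they re-bind, one host at a time to stay available
    systemctl restart ceph-osd.target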

Re: [ceph-users] Latency for the Public Network

2018-02-06 Thread Christian Balzer
Hello, On Tue, 6 Feb 2018 09:21:22 +0100 Tobias Kropf wrote, quoting the earlier exchange with Christian Balzer and his own original message: "Hi ceph list, we have a hyperconverged ceph cluster with kvm on 8 nodes with ceph

Re: [ceph-users] Latency for the Public Network

2018-02-06 Thread Tobias Kropf
On 02/06/2018 04:03 AM, Christian Balzer wrote: "Hello, On Mon, 5 Feb 2018 22:04:00 +0100 Tobias Kropf wrote: 'Hi ceph list, we have a hyperconverged ceph cluster with kvm on 8 nodes with ceph hammer 0.94.10.' Do I smell Proxmox?" Yes, we currently use Proxmox. "The cluster is

[ceph-users] Infinite loop in radosgw-admin usage show

2018-02-06 Thread Ingo Reimann
Just to add: we wrote a little wrapper that reads the output of "radosgw-admin usage show" and stops when the loop starts. When we add up all entries ourselves, the result is correct. Moreover, the duplicate timestamp that we detect to break the loop is not the last one taken into account. E.g.:
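A rough sketch of what such a wrapper might look like, purely as an assumption; the "time" field and the --uid flag are the standard ones, the rest is hypothetical:

    #!/bin/bash
    # stream "radosgw-admin usage show" and stop as soon as a "time" entry repeats,
    # which is the point where the looping output starts over
    radosgw-admin usage show --uid="$1" | awk '
        /"time":/ { if ($0 in seen) exit; seen[$0] = 1 }
        { print }
    '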

Re: [ceph-users] osd_recovery_max_chunk value

2018-02-06 Thread Christian Balzer
On Tue, 6 Feb 2018 13:27:22 +0530 Karun Josy wrote: "Hi Christian, thank you for your help. Ceph version is 12.2.2. So is this value bad? Do you have any suggestions? So to reduce the max chunk, I assume I can choose something like 7 << 20, i.e. 7340032?" More like 4MB to

Re: [ceph-users] osd_recovery_max_chunk value

2018-02-06 Thread Christian Balzer
On Tue, 6 Feb 2018 13:24:18 +0530 Karun Josy wrote: "Hi Christian, thank you for your help. Ceph version is 12.2.2. So is this value bad? Do you have any suggestions?" That should be fine AFAIK; some (all?) versions of Jewel definitely are not. "ceph tell osd.* injectargs
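For completeness, a hedged example of adjusting the option under discussion at runtime; 4 MiB follows the "4MB" suggestion above, and whether that value suits a given cluster is not something the thread settles:

    # lower the recovery chunk size on all OSDs to 4 MiB (4 << 20 = 4194304 bytes)
    ceph tell osd.* injectargs '--osd_recovery_max_chunk 4194304'
    # confirm the running value on one OSD (run on that OSD's host; uses the admin socket)
    ceph daemon osd.0 config get osd_recovery_max_chunk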