Re: [ceph-users] CephFS and many small files

2019-03-29 Thread Paul Emmerich
Are you running on HDDs? The minimum allocation size is 64 KB by default here. You can control that via the parameter bluestore_min_alloc_size during OSD creation. 64 KB times 8 million files is 512 GB, which is the amount of usable space you reported before running the test, so that seems to add up.
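For reference, a minimal sketch of how the allocation size could be lowered for a small-file workload; the 4 KB value and the device path /dev/sdb are illustrative assumptions, and the setting only affects OSDs created after the change:

  # illustrative value; only OSDs created afterwards pick it up
  cat >> /etc/ceph/ceph.conf <<'EOF'
  [osd]
  bluestore_min_alloc_size_hdd = 4096
  EOF

  # recreate the OSD so it is built with the new allocation size
  # WARNING: zap destroys the existing OSD data (example device)
  ceph-volume lvm zap /dev/sdb --destroy
  ceph-volume lvm create --data /dev/sdb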

[ceph-users] Erasure Pools.

2019-03-29 Thread Andrew J. Hutton
I have tried to create erasure pools for CephFS using the examples given at https://swamireddy.wordpress.com/2016/01/26/ceph-diff-between-erasure-and-replicated-pool-type/ but this is resulting in some weird behaviour.  The only number in common is that when creating the metadata store; is
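Not a diagnosis of the odd behaviour, but for comparison, a minimal sketch of the sequence commonly used for an erasure-coded data pool behind CephFS; the profile/pool names, the k=2/m=1 values and the filesystem name "cephfs" are all assumptions:

  # assumed profile, pool and filesystem names; metadata stays on a replicated pool
  ceph osd erasure-code-profile set ec21 k=2 m=1 crush-failure-domain=host
  ceph osd pool create cephfs_data_ec 128 128 erasure ec21
  ceph osd pool set cephfs_data_ec allow_ec_overwrites true   # needed before CephFS can write to it
  ceph fs add_data_pool cephfs cephfs_data_ec

  # optionally direct a directory to the EC pool via a file layout
  setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/ec-data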

[ceph-users] Ceph block storage cluster limitations

2019-03-29 Thread Void Star Nill
Hello, I wanted to know if there are any max limitations on:
- Max number of Ceph data nodes
- Max number of OSDs per data node
- Global max on number of OSDs
- Any limitations on the size of each drive managed by OSD?
- Any limitation on number of client nodes?
- Any limitation on maximum number

Re: [ceph-users] CephFS and many small files

2019-03-29 Thread Patrick Donnelly
Hi Jörn,

On Fri, Mar 29, 2019 at 5:20 AM Clausen, Jörn wrote:
> Hi!
>
> In my ongoing quest to wrap my head around Ceph, I created a CephFS
> (data and metadata pool with replicated size 3, 128 pgs each).

What version?

> When I mount it on my test client, I see a usable space of ~500 GB,

[ceph-users] ceph-iscsi: (Config.lock) Timed out (30s) waiting for excl lock on gateway.conf object

2019-03-29 Thread Matthias Leopold
Hi, I upgraded my test Ceph iSCSI gateways to ceph-iscsi-3.0-6.g433bbaa.el7.noarch. I'm trying to use the new parameter "cluster_client_name", which - to me - sounds like I don't have to access the ceph cluster as "client.admin" anymore. I created a "client.iscsi" user and watched what
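For anyone trying the same thing, a rough sketch of creating such a user; the capability strings below are an assumption (deliberately broad, for a test cluster) rather than the minimal set ceph-iscsi actually needs, and the config file path is the usual /etc/ceph/iscsi-gateway.cfg:

  # assumed, broad caps for a test setup -- not a vetted minimal capability set
  ceph auth get-or-create client.iscsi \
      mon 'allow *' osd 'allow *' mgr 'allow *' \
      -o /etc/ceph/ceph.client.iscsi.keyring

  # then reference it from the gateway config, e.g.:
  #   cluster_client_name = client.iscsi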

[ceph-users] Recommended fs to use with rbd

2019-03-29 Thread Marc Roos
I would like to use an rbd image from a replicated hdd pool in a libvirt/kvm vm.
1. What is the best filesystem to use with rbd, just standard xfs?
2. Is there a recommended tuning for lvm on how to put multiple rbd images?
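Not an answer on the lvm tuning, but a minimal sketch of trying a filesystem on an rbd image from the command line; the pool and image names are made up, and inside a libvirt/kvm guest the image would normally be attached as a virtio disk rather than mapped with krbd:

  # made-up pool/image names; quick test with krbd on the host
  rbd create rbd-hdd/vm-disk1 --size 100G
  rbd map rbd-hdd/vm-disk1
  mkfs.xfs /dev/rbd0          # xfs is a common default choice on rbd
  mount /dev/rbd0 /mnt/test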

Re: [ceph-users] Bluestore WAL/DB decisions

2019-03-29 Thread Erik McCormick
On Fri, Mar 29, 2019 at 1:48 AM Christian Balzer wrote:
> On Fri, 29 Mar 2019 01:22:06 -0400 Erik McCormick wrote:
>
> > Hello all,
> >
> > Having dug through the documentation and reading mailing list threads
> > until my eyes rolled back in my head, I am left with a conundrum
> > still. Do I

Re: [ceph-users] CEPH OSD Restarts taking too long v10.2.9

2019-03-29 Thread Nikhil R
The issue we have is large leveldbs. Do we have any setting to disable compaction of leveldb on OSD start?

in.linkedin.com/in/nikhilravindra

On Fri, Mar 29, 2019 at 7:44 PM Nikhil R wrote:
> Any help on this would be much appreciated as our prod has been down for a day
> and each osd restart is

Re: [ceph-users] CEPH OSD Restarts taking too long v10.2.9

2019-03-29 Thread Nikhil R
Any help on this would be much appreciated as our prod has been down for a day and each osd restart is taking 4-5 hours.

in.linkedin.com/in/nikhilravindra

On Fri, Mar 29, 2019 at 7:43 PM Nikhil R wrote:
> We have maxed out the files per dir. CEPH is trying to do an online split
> due to which

Re: [ceph-users] CEPH OSD Restarts taking too long v10.2.9

2019-03-29 Thread Nikhil R
We have maxed out the files per dir. CEPH is trying to do an online split, due to which OSDs are crashing. We increased the split_multiple and merge_threshold for now and are restarting OSDs. Now, on these restarts, the leveldb compaction is taking a long time. Below are some of the logs.
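For anyone following along, a rough sketch of what "increased the split_multiple and merge_threshold" can look like; the values are simply the ones quoted later in this thread (72 / 480), not a recommendation, and they only take effect on OSD (re)start:

  # values taken from this thread, not a recommendation
  cat >> /etc/ceph/ceph.conf <<'EOF'
  [osd]
  filestore_split_multiple = 72
  filestore_merge_threshold = 480
  EOF

  # optionally pre-split directories offline while the OSD is stopped
  # (the apply-layout-settings op exists in later Jewel/Luminous builds; check your version)
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
      --op apply-layout-settings --pool <poolname>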

[ceph-users] CephFS and many small files

2019-03-29 Thread Clausen, Jörn
Hi! In my ongoing quest to wrap my head around Ceph, I created a CephFS (data and metadata pool with replicated size 3, 128 pgs each). When I mount it on my test client, I see a usable space of ~500 GB, which I guess is okay for the raw capacity of 1.6 TiB I have in my OSDs. I run bonnie
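For context, a minimal sketch of the kind of setup described here (pool names, monitor address and mount path are assumptions); with replicated size 3, 1.6 TiB of raw capacity divided by 3 is roughly 0.53 TiB, so ~500 GB of usable space is about what one would expect:

  # assumed pool names; 128 PGs each as described above
  ceph osd pool create cephfs_metadata 128
  ceph osd pool create cephfs_data 128
  ceph fs new cephfs cephfs_metadata cephfs_data

  # kernel mount on the test client (monitor address and key are placeholders)
  mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secret=<key>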

Re: [ceph-users] Blocked ops after change from filestore on HDD to bluestore on SDD

2019-03-29 Thread Uwe Sauter
Hi,

On 28.03.19 at 20:03, c...@elchaka.de wrote:
> Hi Uwe,
>
> On 28 February 2019 at 11:02:09 CET, Uwe Sauter wrote:
>> On 28.02.19 at 10:42, Matthew H wrote:
>>> Have you made any changes to your ceph.conf? If so, would you mind
>>> copying them into this thread?
>>
>> No, I just deleted an

Re: [ceph-users] Bluestore WAL/DB decisions

2019-03-29 Thread Marc Roos
Hi Erik, For now I have everything on the HDDs, and I have some pools on just SSDs that require more speed. It looked to me like the best way to start simple. I do not seem to need the IOPS yet to change this setup. However, I am curious about what kind of performance increase you will get
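As an aside, a minimal sketch of how SSD-only pools are usually pinned with device classes (the rule and pool names are made up; device classes need Luminous or newer):

  # made-up names; requires CRUSH device classes (Luminous+)
  ceph osd crush rule create-replicated fast-ssd default host ssd
  ceph osd pool set my-fast-pool crush_rule fast-ssd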

Re: [ceph-users] CEPH OSD Restarts taking too long v10.2.9

2019-03-29 Thread huang jun
Nikhil R wrote on Fri, Mar 29, 2019 at 1:44 PM:
>
> if I comment out filestore_split_multiple = 72 and filestore_merge_threshold = 480
> in the ceph.conf, won't ceph take the default values of 2 and 10, and wouldn't we
> end up with more splits and crashes?
>
Yes, that was aimed at making it clear what causes the long start time,
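To see which values a running OSD actually uses (osd.0 is just an example id; run this on the host where that OSD lives):

  # query the admin socket of a running OSD
  ceph daemon osd.0 config get filestore_split_multiple
  ceph daemon osd.0 config get filestore_merge_threshold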