Re: [ceph-users] Beginner questions

2020-01-16 Thread Bastiaan Visser
On 1/16/2020 4:27 PM, Bastiaan Visser wrote: > Dave made a good point: WAL + DB might end up a little over 60 GB, so I would probably go with ~70 GiB partitions / LVs per OSD in your setup.
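
A minimal sketch of carving those per-OSD DB/WAL logical volumes out of a shared flash device (the volume group name "ceph-db" and the OSD count are hypothetical):

    # one ~70G LV per OSD for RocksDB + WAL; VG "ceph-db" sits on the NVMe/SSD
    for i in 0 1 2 3; do
        lvcreate -L 70G -n db-$i ceph-db
    done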

Re: [ceph-users] [External Email] RE: Beginner questions

2020-01-16 Thread Bastiaan Visser
> ... better cluster-wide performance. I can't predict what impact the relatively low client counts you're suggesting would have on that. > Thank you, > Dominic L. Hilsbos, MBA > Director – Information Technology > Perform Air International Inc.

Re: [ceph-users] Beginner questions

2020-01-16 Thread Bastiaan Visser
I would definitely go for Nautilus; quite a few optimizations went in after Mimic. The BlueStore DB size usually ends up at either 30 or 60 GB. 30 GB is one of the sweet spots during normal operation, but during compaction Ceph writes the new data before removing the old, hence the space requirement can briefly double.
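
To sanity-check those numbers against a live cluster, you can read the BlueFS counters of a running OSD (osd.0 is a placeholder; run this on the host that carries it, and note the exact counter names may vary slightly between releases):

    # report how much of the DB device an OSD actually uses
    ceph daemon osd.0 perf dump | python3 -m json.tool | grep -E '"db_(total|used)_bytes"'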

Re: [ceph-users] Looking for the best way to utilize 1TB NVMe added to the host with 8x3TB HDD OSDs

2019-09-20 Thread Bastiaan Visser
Another option is the use of bcache / flashcache. I have experimented with bcache; it is quite easy to set up, but once you run into performance problems it is hard to pinpoint the cause. In the end I ended up just adding more disks to share IOPS, and going for the default setup (db / wal
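
If you go the plain block.db route instead of a caching layer, a sketch of creating one HDD OSD with its DB on an NVMe partition (device names are hypothetical):

    # HDD carries the data, a pre-created NVMe partition carries RocksDB + WAL
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1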

Re: [ceph-users] ceph threads and performance

2019-06-12 Thread Bastiaan Visser
On both larger and smaller clusters I have never had problems with the default values, so I guess that's a pretty good start. - Original Message - From: "tim taler" To: "Paul Emmerich" Cc: "ceph-users" Sent: Wednesday, June 12, 2019 3:51:43 PM Subject: Re: [ceph-users] ceph threads
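
For reference, a quick way to see the thread-related settings an OSD is actually running with (osd.0 is a placeholder; run on that OSD's host):

    # dump the live config and filter the sharded op-queue thread options
    ceph daemon osd.0 config show | grep -E 'osd_op_num_(shards|threads_per_shard)'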

[ceph-users] io-schedulers

2018-11-05 Thread Bastiaan Visser
There are lots of rumors around about the benefit of changing I/O schedulers for OSD disks. Some benchmarks can even be found, but they are all more than a few years old. Since Ceph is moving forward at quite a pace, I am wondering what the common practice is for the I/O scheduler on OSDs.
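
For anyone comparing, checking and switching the scheduler of a single disk looks like this (sdb is a placeholder; the change is lost on reboot unless persisted via a udev rule or tuned profile):

    # show available schedulers; the active one is in brackets
    cat /sys/block/sdb/queue/scheduler
    # switch to noop (or "none" on blk-mq kernels) until the next reboot
    echo noop > /sys/block/sdb/queue/scheduler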

Re: [ceph-users] Monitor Recovery

2018-10-23 Thread Bastiaan Visser
Are you using ceph-deploy? In that case you could do: ceph-deploy mon destroy {host-name [host-name]...} and: ceph-deploy mon create {host-name [host-name]...} to recreate it. - Original Message - From: "John Petrini" To: "ceph-users" Sent: Tuesday, October 23, 2018 8:22:44 PM
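
A sketch with a hypothetical monitor host name (make sure the remaining mons keep quorum while you do this):

    # drop the broken monitor, then recreate it on the same host
    ceph-deploy mon destroy mon03
    ceph-deploy mon create mon03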

Re: [ceph-users] Crushmap and failure domains at rack level (ideally data-center level in the future)

2018-10-23 Thread Bastiaan Visser
Something must be wrong: with min_size 3 the pool should go read-only once you take out the first rack, probably even when you take out the first host. What is the output of ceph osd pool get <pool> min_size? I guess it will be 2, since you did not hit a problem while taking out one rack.
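
Checking and, where appropriate, lowering min_size on a replicated size-3 pool would look like this (the pool name is a placeholder):

    # show the minimum number of replicas required to serve I/O
    ceph osd pool get mypool min_size
    # allow I/O to continue with two replicas, the usual value for size 3
    ceph osd pool set mypool min_size 2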

Re: [ceph-users] Best practices for allocating memory to bluestore cache

2018-08-30 Thread Bastiaan Visser
Your claim that all cache is used for K/V cache is false (with default settings); the K/V cache is capped at roughly 500 MB:

bluestore_cache_kv_max
Description: The maximum amount of cache devoted to key/value data (RocksDB).
Type: Unsigned Integer
Required: Yes
Default: 512 * 1024 * 1024 (512 MB)
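
If you do want to change the split by hand, a minimal ceph.conf sketch (option names as in Luminous, values in bytes and purely illustrative; Mimic and later largely autotune this via osd_memory_target instead):

    [osd]
    # total BlueStore cache per OSD (3 GiB here)
    bluestore_cache_size = 3221225472
    # cap on the RocksDB key/value share (512 MiB, the default)
    bluestore_cache_kv_max = 536870912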

Re: [ceph-users] packages names for ubuntu/debian

2018-08-20 Thread Bastiaan Visser
You should only use the 18.04 repo on 18.04, and remove the 16.04 repo. Use: https://download.ceph.com/debian-luminous bionic main - Bastiaan - Original Message - From: "Alfredo Daniel Rezinovsky" To: "ceph-users" Sent: Sunday, August 19, 2018 10:15:00 PM Subject: [ceph-users]
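
As an apt sources entry that would be (path and suite taken from the message above):

    # /etc/apt/sources.list.d/ceph.list
    deb https://download.ceph.com/debian-luminous bionic main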

Re: [ceph-users] Ceph-mon MTU question

2018-08-16 Thread Bastiaan Visser
From experience I can tell you that all mons need to use the same MTU between each other. We moved from 1500 to 9000 a while ago and lost quorum while changing the MTU of the mons. Once all mons were at 9000, everything was fine again. The cluster ran fine with 9000 on the OSDs + clients and
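
For reference, a sketch of checking and raising the MTU on a mon's interface (eth0 is a placeholder; do the mons one at a time and persist the change in your network configuration):

    # show the current MTU
    ip link show dev eth0 | grep mtu
    # raise to jumbo frames (non-persistent until written to the network config)
    ip link set dev eth0 mtu 9000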

Re: [ceph-users] ceph-mgr dashboard behind reverse proxy

2018-08-09 Thread Bastiaan Visser
This will work:

backend ceph01
    option httpchk GET /
    http-check expect status 200
    server mgr01 *.*.*.*:7000 check
    server mgr02 *.*.*.*:7000 check
    server mgr03 *.*.*.*:7000 check

Regards, Bastiaan - Original Message - From: "Marc Schöchlin" To:
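
A matching frontend stanza, purely as an illustration (name and port are hypothetical):

    frontend ceph-dashboard
        bind *:80
        default_backend ceph01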

Re: [ceph-users] Upgrading journals to BlueStore: a conundrum

2018-08-06 Thread Bastiaan Visser
As long as your failure domain is host (or even rack) you're good: just take out the entire host and recreate all OSDs on it. - Original Message - From: "Robert Stanford" To: "ceph-users" Sent: Monday, August 6, 2018 8:39:07 PM Subject: [ceph-users] Upgrading journals to BlueStore: a
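
A rough sketch of that cycle for one OSD on the host (ID and device are hypothetical; repeat per OSD and wait for HEALTH_OK before moving to the next host):

    # stop the FileStore OSD, recycle its ID, and recreate it as BlueStore
    systemctl stop ceph-osd@12
    ceph osd destroy 12 --yes-i-really-mean-it
    ceph-volume lvm zap /dev/sdc --destroy
    ceph-volume lvm create --bluestore --osd-id 12 --data /dev/sdc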