> On 1/16/2020 4:27 PM, Bastiaan Visser wrote:
>
> Dave made a good point: WAL + DB might end up a little over 60G, I would
> probably go with ~70Gig partitions / LVs per OSD in your case.
>
> > [...] better cluster-wide performance. I can't predict how the relatively
> > low client counts you're suggesting would impact that.
> >
> > Thank you,
> >
> > Dominic L. Hilsbos, MBA
> > Director – Information Technology
> > Perform Air International Inc.
I would definitely go for Nautilus; there are quite a few optimizations that
went in after Mimic.
Bluestore DB size usually ends up at either 30 or 60 GB.
30 GB is one of the sweet spots during normal operation. But during
compaction, Ceph writes the new data before removing the old, hence the
DB can temporarily need roughly double that, which is where the 60 GB sweet
spot comes from.
Another option is the use of bcache / flashcache.
I have experimented with bcache; it is quite easy to set up, but once you run
into performance problems it is hard to pinpoint the cause.
In the end I just added more disks to share IOPS, and went for the
default setup (DB / WAL on the same device as the data).
On both larger and smaller clusters I have never had problems with the default
values, so I guess that's a pretty good start.
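For reference, specifying a separate DB partition/LV when creating an OSD
looks something like this (device and VG/LV names are just placeholders):

    # data on the spinner, a ~60-70G DB LV on the fast device
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db ceph-db/db-sdb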
- Original Message -
From: "tim taler"
To: "Paul Emmerich"
Cc: "ceph-users"
Sent: Wednesday, June 12, 2019 3:51:43 PM
Subject: Re: [ceph-users] ceph threads
There are lots of rumors around about the benefit of changing I/O schedulers
for OSD disks.
Even some benchmarks can be found, but they are all more than a few years old.
Since Ceph is moving forward at quite a pace, I am wondering what the common
practice is for the I/O scheduler on OSDs.
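For reference, the active and available schedulers can be checked and changed
per device through sysfs (device name is just an example; on blk-mq kernels
the usual choices are none, mq-deadline, bfq and kyber):

    cat /sys/block/sdb/queue/scheduler
    echo none > /sys/block/sdb/queue/scheduler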
are you using ceph-deploy?
In that case you could do:
ceph-deploy mon destroy {host-name [host-name]...}
and:
ceph-deploy mon create {host-name [host-name]...}
to recreate it.
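For a single mon host that would be something like (hostname is only an
example):

    ceph-deploy mon destroy mon01
    ceph-deploy mon create mon01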
- Original Message -
From: "John Petrini"
To: "ceph-users"
Sent: Tuesday, October 23, 2018 8:22:44 PM
Something must be wrong: since you have min_size 3 the pool should go read-only
once you take out the first rack, probably even when you take out the first
host.
What is the output of "ceph osd pool get <pool-name> min_size"?
I guess it will be 2, since you did not hit a problem while taking out one
rack.
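To check and, if necessary, change it (pool name and value are placeholders):

    ceph osd pool get <pool-name> min_size
    ceph osd pool set <pool-name> min_size <value>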
Your claim that all cache is used for K/V cache is false (with default
settings).
The K/V cache is capped at 512 MB by default:

bluestore_cache_kv_max
Description: The maximum amount of cache devoted to key/value data (RocksDB).
Type:        Unsigned Integer
Required:    Yes
Default:     512 * 1024 * 1024 (512 MB)
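On releases that still have this option, the effective value can be checked on
a running OSD, or overridden in ceph.conf, roughly like this (OSD id and the
override value are just examples):

    ceph daemon osd.0 config get bluestore_cache_kv_max

    [osd]
    bluestore_cache_kv_max = 1073741824   # 1 GiB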
You should only use the 18.04 repo on 18.04, and remove the 16.04 repo.
Use:
https://download.ceph.com/debian-luminous bionic main
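As an apt sources entry that would be (the file name is just a convention):

    # /etc/apt/sources.list.d/ceph.list
    deb https://download.ceph.com/debian-luminous bionic main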
- Bastiaan
- Original Message -
From: "Alfredo Daniel Rezinovsky"
To: "ceph-users"
Sent: Sunday, August 19, 2018 10:15:00 PM
Subject: [ceph-users]
From experience I can tell you that all mons need to use the same MTU between
each other.
We moved from 1500 to 9000 a while ago and lost quorum while changing the MTU
of the mons. Once all mons were at 9000, everything was fine again.
The cluster ran fine with 9000 on the OSDs + clients and the mons still at
1500.
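To verify jumbo frames actually work end to end between the mons, something
like this helps (interface name and address are placeholders; 8972 bytes is
9000 minus the IP/ICMP headers):

    ip link set dev eth0 mtu 9000
    ping -M do -s 8972 <other-mon-ip>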
This will work:

backend ceph01
    option httpchk GET /
    http-check expect status 200
    server mgr01 *.*.*.*:7000 check
    server mgr02 *.*.*.*:7000 check
    server mgr03 *.*.*.*:7000 check
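You also need a frontend (or listen) section pointing at that backend; a
minimal sketch, with the bind address/port as placeholders and the mode taken
from your defaults section:

frontend ceph-dashboard
    bind *:7000
    default_backend ceph01

The idea is that only the active mgr answers the health check with a 200, so
traffic always lands on the active dashboard.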
Regards,
Bastiaan
- Original Message -
From: "Marc Schöchlin"
To:
As long as your fault domain is host (or even rack) you're good: just take out
the entire host and recreate all OSDs on it.
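Per OSD that boils down to something like this (the id and device are
placeholders, and this assumes you either let the cluster rebalance or set
noout as appropriate):

    ceph osd out <id>
    systemctl stop ceph-osd@<id>
    ceph osd purge <id> --yes-i-really-mean-it
    ceph-volume lvm zap /dev/sdX --destroy
    ceph-volume lvm create --bluestore --data /dev/sdX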
- Original Message -
From: "Robert Stanford"
To: "ceph-users"
Sent: Monday, August 6, 2018 8:39:07 PM
Subject: [ceph-users] Upgrading journals to BlueStore: a