On Tue, 2 Oct 2018, jes...@krogh.cc wrote:
> Hi.
>
> Based on some recommendations we have set up our CephFS installation using
> bluestore*. We're trying to get a strong replacement for a "huge" xfs+NFS
> server - 100TB-ish size.
>
> Current setup is - a sizeable Linux host with 512GB of memory -
On 03.10.2018 20:10, jes...@krogh.cc wrote:
Your use case sounds like it might profit from the rados cache tier
feature. It's a rarely used feature because it only works in very
specific circumstances. But your scenario sounds like it might work.
Definitely worth giving it a try. Also, dm-cache with LVM *might* help.
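(For reference, not from the thread itself: attaching a cache tier in front of
an existing pool looks roughly like the following. Pool names, PG count and
thresholds are placeholders, not recommendations.)

    # create the cache pool on the faster devices, then put it in front of the
    # existing data pool (both pool names here are made up)
    ceph osd pool create cache_pool 128
    ceph osd tier add cephfs_data cache_pool
    ceph osd tier cache-mode cache_pool writeback
    ceph osd tier set-overlay cephfs_data cache_pool

    # the tiering agent needs hit-set tracking plus flush/evict limits
    ceph osd pool set cache_pool hit_set_type bloom
    ceph osd pool set cache_pool hit_set_count 12
    ceph osd pool set cache_pool hit_set_period 3600
    ceph osd pool set cache_pool target_max_bytes 1099511627776   # ~1 TiB
    ceph osd pool set cache_pool cache_target_dirty_ratio 0.4
    ceph osd pool set cache_pool cache_target_full_ratio 0.8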
On Wed, 3 Oct 2018 at 20:10, wrote:
> They are ordered and will hopefully arrive very soon.
>
> Can I:
> 1) Add disks
> 2) Create pool
> 3) stop all MDS's
> 4) rados cppool
> 5) Start MDS
>
> .. Yes, that's a cluster-down on CephFS but shouldn't take long. Or is
> there a better guide?
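(A sketch of those steps as commands, not from the thread. Pool names and the
MDS unit id are placeholders, and note that rados cppool does not copy
snapshots and the filesystem map still references the old pool by ID, so check
the docs before treating this as a complete migration.)

    # 1) add the new disks as OSDs first (ceph-volume / ceph-deploy, omitted here)
    # 2) create the destination pool (placeholder name and PG count)
    ceph osd pool create cephfs_metadata_new 64
    # 3) stop every MDS so nothing writes during the copy
    #    (the MDS id is often the short hostname)
    systemctl stop ceph-mds@$(hostname -s)
    # 4) copy all objects from the old pool into the new one
    rados cppool cephfs_metadata cephfs_metadata_new
    # 5) start the MDS daemons again
    systemctl start ceph-mds@$(hostname -s)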
> Your use case sounds like it might profit from the rados cache tier
> feature. It's a rarely used feature because it only works in very
> specific circumstances. But your scenario sounds like it might work.
> Definitely worth giving it a try. Also, dm-cache with LVM *might*
> help.
> But if your
I would never ever start a new cluster with Filestore nowadays. Sure,
there are a few minor issues with Bluestore, such as currently requiring
some manual configuration for the cache. But overall, Bluestore is so
much better.
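(For context, not from the thread: the manual cache configuration mentioned
above is usually a per-OSD setting in ceph.conf; the sizes below are purely
illustrative.)

    [osd]
    # BlueStore bypasses the kernel page cache, so its cache is sized per OSD
    # process; defaults are roughly 1 GiB for HDD and 3 GiB for SSD OSDs
    bluestore_cache_size_hdd = 4294967296    # 4 GiB per HDD-backed OSD
    bluestore_cache_size_ssd = 8589934592    # 8 GiB per SSD-backed OSD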
On Tue, Oct 2, 2018 at 6:28 PM wrote:
>
> Hi.
>
> Based on some recommendations we have set up our CephFS installation using
> bluestore*. We're trying to get a strong replacement for a "huge" xfs+NFS
> server - 100TB-ish size.
>
> Current setup is - a sizeable Linux host with 512GB of memory - one
Hello,
this has crept up before, find my thread
"Bluestore caching, flawed by design?" for starters, if you haven't
already.
I'll have to build a new Ceph cluster next year and am also less than
impressed with the choices at this time:
1. Bluestore is the new shiny, filestore is going to die
On 02.10.2018 21:21, jes...@krogh.cc wrote:
> On 02.10.2018 19:28, jes...@krogh.cc wrote:
> In the cephfs world there is no central server that holds the cache; each
> cephfs client reads data directly from the OSDs. This also means no
> single point of
I can accept this argument, but nevertheless .. if I used Filestore - it
would work.
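(To illustrate the point, not part of the original mail: with BlueStore the
page cache on the OSD hosts no longer says much, but each OSD's own cache can
be inspected through its admin socket, e.g. for a locally running osd.0.)

    ceph daemon osd.0 dump_mempools                        # memory used by the BlueStore caches
    ceph daemon osd.0 config show | grep bluestore_cache   # effective cache settings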
On 02.10.2018 19:28, jes...@krogh.cc wrote:
Hi.
Based on some recommendations we have set up our CephFS installation using
bluestore*. We're trying to get a strong replacement for a "huge" xfs+NFS
server - 100TB-ish size.
Current setup is - a sizeable Linux host with 512GB of memory - one large
Dell MD1200 or MD1220 - 100TB + a Linux kernel
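(For completeness, not from the thread: on the clients the old NFS mount would
be replaced by a CephFS mount along these lines; monitor address, user name
and secret file are placeholders.)

    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
        -o name=cephfs_user,secretfile=/etc/ceph/cephfs.secret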