For read-only workloads this should make no difference, since all reads normally come from the SSD anyway. But I think it's still beneficial for writing, backfilling, and recovering. Also, I will have some HDD-only pools, so WAL/DB on SSD will definitely improve performance for those pools. I will always put
On Nov 10, 2020, at 02:26, Dave Hall wrote:
This thread caught my attention. I have a smaller cluster with a lot of
OSDs sharing the same SSD on each OSD node. I mentioned in an earlier post
that I found a statement in
https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/
indicating that if the SSD/NVMe in a node is
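For context on how several OSDs end up sharing one SSD in the first place: the common way to carve a single SSD into DB slices for multiple HDD OSDs is `ceph-volume lvm batch`. A minimal sketch, with entirely hypothetical device names (not Dave's actual layout):

```
# Hypothetical devices: four HDD data devices sharing one SSD for block.db.
# ceph-volume splits the --db-devices disk into equal LVs, one per OSD;
# with no separate --wal-devices, the WAL lives inside each DB slice.
ceph-volume lvm batch --bluestore \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde \
    --db-devices /dev/nvme0n1
```

This is exactly the layout where losing that one SSD takes down all four OSDs at once, which is the single point of failure discussed below.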
Sorry for the confusion; what I meant to say is that "having all WAL/DB
on one SSD will result in a single point of failure". If that SSD goes
down, all OSDs depending on it will also stop working.
What I'd like to confirm is that there is no benefit to putting WAL/DB
on SSD when there is either a cache tier
> On Nov 8, 2020, at 11:30, Tony Liu wrote:
Is it FileStore or BlueStore? With this SSD-HDD solution, is the journal
or WAL/DB on SSD or HDD? My understanding is that there is no
benefit to putting the journal or WAL/DB on SSD with such a solution. It would
also eliminate the single point of failure that comes with having all WAL/DB
on one SSD. Just want to
Thanks for digging this out. I seemed to remember exactly this method (I don't
know where from), but couldn't find it in the documentation and started
doubting it. Yes, this would be very useful information to add to the
documentation, and it also confirms that your simpler setup with just a
Thank you for sharing your experience. Glad to hear that someone has already
used this strategy and it works well.
> On Oct 27, 2020, at 03:10, Reed Dier wrote:
Late reply, but I have been using what I refer to as a "hybrid" CRUSH topology
for some data for a while now.
Initially with just RADOS objects, and later with RBD.
We found that we were able to accelerate reads to roughly all-SSD performance
levels, while bringing up the tail end of the write
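For anyone wondering what such a "hybrid" rule can look like: below is a sketch of a replicated rule that takes the first (primary) copy from the ssd device class and the remaining copies from hdd. The rule name and id are assumptions for illustration, not Reed's actual rule:

```
rule hybrid {
    id 5
    type replicated
    min_size 1
    max_size 10
    # first replica (which becomes the primary) from an SSD host
    step take default class ssd
    step chooseleaf firstn 1 type host
    step emit
    # all remaining replicas from HDD hosts
    step take default class hdd
    step chooseleaf firstn -1 type host
    step emit
}
```

This has to be compiled back into the CRUSH map with crushtool; note the caveat that the host picked in the SSD step can coincide with one of the hosts picked in the HDD step, since the two steps choose independently.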
Please share benchmark data if you test this out. I am sure many would
be interested.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
> On Oct 26, 2020, at 15:43, Frank Schilder wrote:
> I’ve never seen anything that implies that lead OSDs within an acting set are
> a function of CRUSH rule ordering.

This is actually a good question. I believed that I had seen/heard that
somewhere, but I might be wrong.
Looking at the definition of a PG, it states that a PG is an ordered set
I would like to add one comment.
I'm not entirely sure whether primary-on-SSD will actually make reads happen on
the SSD. For EC pools there is an option "fast_read"
(https://docs.ceph.com/en/latest/rados/operations/pools/?highlight=fast_read#set-pool-values),
which states that a read will return as
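For completeness, fast_read is a per-pool flag and applies to EC pools only; a sketch, with a hypothetical pool name:

```
# Enable fast_read on a (hypothetical) EC pool: the primary replies as soon
# as enough shards have arrived to decode the object, instead of waiting
# for all shards.
ceph osd pool set ecpool fast_read 1
ceph osd pool get ecpool fast_read
```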
A cache pool might be an alternative, depending heavily on how much of the data
is hot. However, you will then have much less SSD capacity available, because
the cache tier also requires replication.
Looking at the setup, with only 10*1T = 10T of SSD but 20*6T = 120T of HDD,
you will probably run short of SSD
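A rough back-of-the-envelope check of that, sketched in shell (raw sizes taken from the thread; the 1-SSD-copy + 3-HDD-copies scheme discussed further down is assumed):

```shell
ssd_raw=10    # 10 x 1T SSD, in TB
hdd_raw=120   # 20 x 6T HDD, in TB
# hybrid pool with size=4: 1 copy on SSD, 3 copies on HDD
ssd_bound=$(( ssd_raw / 1 ))
hdd_bound=$(( hdd_raw / 3 ))
usable=$(( ssd_bound < hdd_bound ? ssd_bound : hdd_bound ))
echo "usable capacity ~${usable}T, limited by the SSDs"
```

So even ignoring overhead and nearfull thresholds, the SSD side caps the hybrid pool at roughly 10T while the HDDs could hold 40T worth of triple-replicated data.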
> On Oct 26, 2020, at 00:07, Anthony D'Atri wrote:
Thanks for the comments.
I also thought about cache tiering. As you said, that also requires
replication and gives us less available space.
As for the HDD capacity, I can create another HDD-only pool to store some cold
data. And we are also considering adding more SSDs. This deployment is
> I'm not entirely sure if primary on SSD will actually make the read happen on
> SSD.
My understanding is that by default reads always happen from the lead OSD in
the acting set. Octopus seems to (finally) have an option to spread the reads
around, which IIRC defaults to false.
I’ve never
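One knob related to which OSD ends up as the lead: primary affinity biases the choice of primary within the acting set. A hedged sketch (the osd IDs are made up; the idea is to lower affinity on HDD OSDs so the SSD replica tends to serve reads):

```
# Push HDD OSDs away from the primary role; 1.0 is the default,
# 0.0 means "never primary unless there is no alternative".
ceph osd primary-affinity osd.20 0.0
ceph osd primary-affinity osd.21 0.0
```

With a two-step hybrid CRUSH rule this may be redundant, since the SSD copy already comes first in the rule's output, but it is an alternative way to steer reads without editing the CRUSH map.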
Yes. This is the limitation of the CRUSH algorithm, in my mind. In order to guard
against 2 host failures, I'm going to use 4 replicas: 1 on SSD and 3 on
HDD. This will work as intended, right? Because at least I can ensure the 3 HDDs
are on different hosts.
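Assuming a hybrid rule (e.g. one named hybrid, as sketched earlier in the thread) is already compiled into the CRUSH map, the pool side of that plan could look like this; the pool and rule names are placeholders:

```
ceph osd pool set mypool crush_rule hybrid
ceph osd pool set mypool size 4       # 1 SSD copy + 3 HDD copies
ceph osd pool set mypool min_size 2
```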
> On Oct 25, 2020, at 20:04, Alexander E.
On Sun, Oct 25, 2020 at 12:11 PM huw...@outlook.com wrote:
>
> Hi all,
>
> We are planning for a new pool to store our dataset using CephFS. These data
> are almost read-only (but not guaranteed) and consist of a lot of small
> files. Each node in our cluster has 1 * 1T SSD and 2 * 6T HDD, and