Hi,
I assume your cluster is deployed by cephadm. I would look at “sudo journalctl
-u <service>” on each host. You may find the service names using
“systemctl list-units | grep ceph-.*service”.
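For example, something along these lines; osd.0 and the fsid are only
placeholders, so use whichever unit names the list-units command shows on that
host:

  systemctl list-units | grep 'ceph-.*service'
  sudo journalctl -u ceph-<fsid>@osd.0.service --since "2 hours ago"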
> On 26 Oct 2020, at 08:05, Darrin Hodges wrote:
>
> Hi all,
>
> Had an issue where the docker containers on all
On 2020-10-25 15:20, Amudhan P wrote:
> Hi,
>
> Just for my quick understanding, how are PGs responsible for allowing space
> allocation to a pool?
>
> My understanding is that PGs basically help with object placement; when the
> number of PGs per OSD is high, there is a high possibility that a PG
>
Hi all,
Had an issue where the docker containers on all the Ceph nodes just seemed
to stop at some point, effectively shutting down the cluster. Restarting
Ceph on all of the nodes restored the cluster to normal working order.
I would like to find out why this occurred; any ideas on where to look?
Yes, this is the status
# ceph -s
  cluster:
    id:     ab471d92-14a2-11eb-ad67-525400bbdc0d
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum ceph0.starfleet.sns.it,ceph1,ceph3,ceph5,ceph4 (age 104m)
    mgr: ceph1.jxmtpn(active, since 17m), standbys: ceph0.starfleet.sns.it.clzhjp
Is one of the MGRs up? What is the ceph status?
Quoting Marco Venuti:
Hi,
I'm experimenting with Ceph on a (small) test cluster. I'm using version 15.2.5
deployed with cephadm.
I was trying to do some "disaster" testing, such as wiping a disk in order
to simulate a hardware failure, destroy the
Hi,
I'm experimenting with Ceph on a (small) test cluster. I'm using version 15.2.5
deployed with cephadm.
I was trying to do some "disaster" testing, such as wiping a disk in order
to simulate a hardware failure, destroying the OSD and recreating it, all of
which I managed to do successfully.
However, a
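For anyone repeating this, a sequence along these lines should work with
cephadm on Octopus (untested as written; the OSD id, host name and device path
are placeholders):

  ceph osd out 3
  ceph orch daemon rm osd.3 --force
  ceph osd purge 3 --yes-i-really-mean-it
  ceph orch device zap host1 /dev/sdb --force
  ceph orch daemon add osd host1:/dev/sdb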
I would like to add one comment.
I'm not entirely sure if primary on SSD will actually make the read happen on
SSD. For EC pools there is an option "fast_read"
(https://docs.ceph.com/en/latest/rados/operations/pools/?highlight=fast_read#set-pool-values),
which states that a read will return as
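It is a per-pool setting if you want to try it; the pool name below is a
placeholder:

  ceph osd pool set <ec-pool> fast_read 1
  ceph osd pool get <ec-pool> fast_read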
A cache pool might be an alternative, heavily depending on how much data is
hot. However, then you will have much less SSD capacity available, because it
also requires replication.
Looking at the setup, you have only 10*1T = 10T of SSD but 20*6T = 120T of HDD,
so you will probably run short of SSD
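If you do want to experiment with a cache tier, the basic setup looks roughly
like this (pool names and the size limit are placeholders, not recommendations):

  ceph osd tier add cephfs_data ssd-cache
  ceph osd tier cache-mode ssd-cache writeback
  ceph osd tier set-overlay cephfs_data ssd-cache
  ceph osd pool set ssd-cache hit_set_type bloom
  ceph osd pool set ssd-cache target_max_bytes 1099511627776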
> On 26 Oct 2020, at 00:07, Anthony D'Atri wrote:
>
>> I'm not entirely sure if primary on SSD will actually make the read happen
>> on SSD.
>
> My understanding is that by default reads always happen from the lead OSD in
> the acting set. Octopus seems to (finally) have an option to spread the
>
Thanks for the comments.
I also thought about cache tiering. As you said, that also requires
replication and gives us less available space.
As for the HDD capacity, I can create another HDD-only pool to store some cold
data. And we are also considering adding more SSDs. This deployment is
> I'm not entirely sure if primary on SSD will actually make the read happen on
> SSD.
My understanding is that by default reads always happen from the lead OSD in
the acting set. Octopus seems to (finally) have an option to spread the reads
around, which IIRC defaults to false.
I’ve never
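(To see which OSD is the primary for a given PG, something like the following
works; the PG id is just an example:

  ceph pg map 2.1f

The first OSD in the acting set it prints is the primary, which is where reads
are served from by default.)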
Hi,
Just for my quick understanding, how are PGs responsible for allowing space
allocation to a pool?
My understanding is that PGs basically help with object placement; when the
number of PGs per OSD is high, there is a high possibility that a PG gets a
lot more data than other PGs. In this situation,
On 2020-10-25 05:33, Amudhan P wrote:
> Yes, there is an imbalance in the PGs assigned to OSDs.
> `ceph osd df` output snip
> ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META
> AVAIL %USE VAR PGS STATUS
> 0 hdd 5.45799 1.0 5.5 TiB 3.6 TiB 3.6 TiB 9.7
Yes. This is a limitation of the CRUSH algorithm, in my mind. In order to guard
against 2 host failures, I'm going to use 4 replicas, 1 on SSD and 3 on
HDD. This will work as intended, right? Because at least I can ensure the 3 HDDs
are on different hosts.
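A rough sketch of what such a rule could look like in the decompiled CRUSH map
(assuming ssd/hdd device classes are in use; the rule name and id are
placeholders and this is untested):

rule ssd-primary-hybrid {
        id 5
        type replicated
        min_size 1
        max_size 10
        step take default class ssd
        step chooseleaf firstn 1 type host
        step emit
        step take default class hdd
        step chooseleaf firstn -1 type host
        step emit
}

It can be injected with the usual ceph osd getcrushmap / crushtool -d / edit /
crushtool -c / ceph osd setcrushmap cycle, and the pool set to size 4 afterwards.
Note that nothing forces the SSD host to differ from the HDD hosts, which is why,
as you say, only the 3 HDDs are guaranteed to be on different hosts.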
> On 25 Oct 2020, at 20:04, Alexander E.
Hi,
In Ceph, when you create an object, it does not just go to any OSD that fits. An
object is mapped to a placement group using a hash algorithm. Then placement
groups are mapped to OSDs. See [1] for details. So, if any of your OSDs goes
full, write operations cannot be guaranteed to succeed. Once you
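(You can see this mapping directly; pool and object names here are placeholders:

  ceph osd map <poolname> <objectname>

This prints the PG the object hashes to and the up/acting set of OSDs that PG
maps to.)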
On Sun, Oct 25, 2020 at 12:11 PM huw...@outlook.com wrote:
>
> Hi all,
>
> We are planning for a new pool to store our dataset using CephFS. These data
> are almost read-only (but not guaranteed) and consist of a lot of small
> files. Each node in our cluster has 1 * 1T SSD and 2 * 6T HDD, and
Hi Stefan,
I have started the balancer, but what I don't understand is that there is enough
free space on the other disks.
Why is it not showing those in the available space?
How do I reclaim the free space?
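(The usual way to enable the balancer in upmap mode, for reference; upmap needs
all clients to be at least Luminous:

  ceph osd set-require-min-compat-client luminous
  ceph balancer mode upmap
  ceph balancer on
  ceph balancer status
  ceph osd df tree)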
On Sun, 25 Oct 2020, 2:27 PM Stefan Kooman wrote:
> On 2020-10-25 05:33, Amudhan P wrote:
> > Yes, There
Hi all,
We are planning for a new pool to store our dataset using CephFS. These data
are almost read-only (but not guaranteed) and consist of a lot of small files.
Each node in our cluster has 1 * 1T SSD and 2 * 6T HDD, and we will deploy
about 10 such nodes. We aim at getting the highest read
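One straightforward split would be plain device-class CRUSH rules, roughly like
this (pool names and PG counts are placeholders, not tuned for this cluster):

  ceph osd crush rule create-replicated rule-ssd default host ssd
  ceph osd crush rule create-replicated rule-hdd default host hdd
  ceph osd pool create cephfs_metadata 64 64 replicated rule-ssd
  ceph osd pool create cephfs_data 512 512 replicated rule-hdd

e.g. CephFS metadata (and perhaps a small hot-data pool) on SSD and the bulk
data on HDD.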