Hi Ceph users,
I am designing a Ceph system with 6 servers running all-NVMe storage. Do I need to
use 3 separate servers to run the MON services that communicate with
OpenStack, or should I integrate the MON services into the OSD servers?
What is the recommendation? Thank you.
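For small clusters it is common to co-locate three MONs on OSD hosts rather than dedicate servers to them. As a sketch of how that placement might be pinned with cephadm (host1..host3 are placeholder hostnames, not from the thread):

```shell
# Pin three MON daemons to specific hosts in a cephadm-managed cluster;
# host1, host2, host3 are placeholder hostnames.
ceph orch apply mon --placement="3 host1 host2 host3"
# Verify where the MON daemons landed
ceph orch ps --daemon-type mon
```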
Check the MTU between nodes first; ping with an MTU-sized payload to verify it.
On Mon, Jun 17, 2024 at 22:59, Sarunas Burdulis <
saru...@math.dartmouth.edu> wrote:
> Hi,
>
> 6 host 16 OSD cluster here, all SATA SSDs. All Ceph daemons version
> 18.2.2. Host OS is Ubuntu 24.04. Intel X540 10Gb/s inter
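An end-to-end MTU check like the one suggested above can be done with ping's don't-fragment flag (peer-host is a placeholder node name):

```shell
# Verify a 9000-byte MTU end to end: payload = 9000 - 28 bytes (IP + ICMP headers);
# -M do forbids fragmentation, so an undersized link in the path makes this fail.
ping -M do -s 8972 -c 3 peer-host
```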
Hi Anthony,
I have 15 nodes, with 18 HDDs and 6 SSDs per node.
On Tue, Jun 11, 2024 at 10:29, Anthony D'Atri <
anthony.da...@gmail.com> wrote:
> What specifically are your OSD devices?
>
> On Jun 10, 2024, at 22:23, Phong Tran Thanh
> wrote:
>
Hi Anthony!
My OSDs are 12 TB 7200 rpm HDDs, with 960 GB SSDs for WAL/DB.
Thanks, Anthony!
On Tue, Jun 11, 2024 at 10:29, Anthony D'Atri <
anthony.da...@gmail.com> wrote:
> What specifically are your OSD devices?
>
> On Jun 10, 2024, at 22:23, Phong Tran Thanh
> wrote:
Hi everyone,
I want to ask about placement groups in the scrubbing state. My cluster has
too many PGs in the scrubbing state and the count is increasing over time;
maybe scrubbing is taking too long.
The cluster itself reports no problem.
I'm using the Reef version.
root@n1s1:~# ceph health detail
HEALTH_OK
I want to
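To see which PGs are scrubbing and what the per-OSD scrub concurrency limit is, one option is:

```shell
# List PGs whose state includes "scrubbing" (pgs_brief prints pgid and state)
ceph pg dump pgs_brief | grep scrubbing
# Show how many scrubs each OSD may run in parallel; raising this
# lets a backlog of scheduled scrubs drain faster
ceph config get osd osd_max_scrubs
```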
~~ 2x the data
> of 3x replication while not going overboard on the performance hit.
>
> If you care about your data, do not set m=1.
>
> If you need to survive the loss of many drives, say if your cluster is
> across multiple buildings or sites, choose a larger value of m. There ar
…service outages during such operations.
>
> Best regards,
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
>
> From: Anthony D'Atri
> Sent: Saturday, January 13, 2024 5:36 PM
> To: Phong Tr
Hi Ceph users!
I need to determine which erasure code values (k and m) to choose for a
Ceph cluster with 10 nodes.
I am using the Reef version with RBD. Furthermore, when using a larger k,
for example EC 6+2 versus EC 4+2, which gives better erasure-coding
performance, and what are the criteria for choosing?
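As a back-of-the-envelope comparison (plain arithmetic, not Ceph-specific): a k+m profile stores (k+m)/k raw bytes per logical byte, while 3x replication stores 3, and m is the number of simultaneous OSD failures tolerated:

```shell
# Raw-space multiplier of a k+m erasure-code profile is (k+m)/k;
# m is also how many simultaneous OSD failures the profile survives.
for km in "4 2" "6 2"; do
  set -- $km
  awk -v k=$1 -v m=$2 \
    'BEGIN { printf "EC %d+%d: %.2fx raw space, tolerates %d failures\n", k, m, (k+m)/k, m }'
done
# EC 4+2: 1.50x raw space, tolerates 2 failures
# EC 6+2: 1.33x raw space, tolerates 2 failures
```

Larger k lowers the space overhead but widens each stripe, so more OSDs participate in every write and every degraded read, which generally costs performance; that trade-off is one of the main criteria for choosing.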
Only change it with a custom profile, not with the built-in profiles; I am
configuring it from the Ceph dashboard.
osd_mclock_scheduler_client_wgt=6 -> this is my setting
On Sat, Jan 13, 2024 at 02:19, Anthony D'Atri
wrote:
>
>
> > On Jan 12, 2024, at 03:31, Phong
> *From:* Phong Tran Thanh
> *Sent:* Friday, January 12, 2024 3:32 PM
> *To:* David Yang
> *Cc:* ceph-user
I updated the config:
osd_mclock_profile=custom
osd_mclock_scheduler_background_recovery_lim=0.2
osd_mclock_scheduler_background_recovery_res=0.2
osd_mclock_scheduler_client_wgt=6
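The same settings, applied cluster-wide via the CLI instead of the dashboard (a sketch; the values are the ones quoted in the mail):

```shell
# Switch OSDs to the custom mClock profile so individual
# scheduler parameters become tunable
ceph config set osd osd_mclock_profile custom
# Cap and reserve background-recovery share, boost client weight
ceph config set osd osd_mclock_scheduler_background_recovery_lim 0.2
ceph config set osd osd_mclock_scheduler_background_recovery_res 0.2
ceph config set osd osd_mclock_scheduler_client_wgt 6
```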
On Fri, Jan 12, 2024 at 15:31, Phong Tran Thanh <
tranphong...@gmail.com> wrote:
> Hi Yang an
Hi Yang and Anthony,
I found a solution to this problem on 7200 rpm HDDs.
When the cluster is recovering from one or multiple disk failures, slow ops
appear and then affect the cluster; we can change the configurations below,
which may reduce recovery IOPS:
osd_mclock_profile=custom
osd_mclock
Hi community,
I'm currently facing a significant issue with my Ceph cluster. The cluster
consists of 10 nodes; each node has 6 SSDs of 960 GB used for block.db and
18 drives of 12 TB used for data, with 2x10 Gbps bonded networking for the
public and cluster networks.
I am using a 4+2 erasure c
Hi community,
I'm running a Ceph cluster with 10 nodes and 180 OSDs. I created an
erasure-coded 4+2 pool with 256 PGs, but pool creation is very slow and PGs
are stuck peering:
HEALTH_WARN Reduced data availability: 5 pgs inactive, 5 pgs peering
[WRN] PG_AVAILABILITY: Reduced data availabi
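For reference, a 4+2 pool like the one described might be created and its stuck PGs inspected as follows (the profile and pool names are placeholders):

```shell
# 4+2 EC profile with host as the failure domain (needs at least 6 hosts)
ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
ceph osd pool create mypool 256 256 erasure ec42
# Show PGs stuck in inactive/peering states
ceph pg dump_stuck inactive
```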
Hi community,
When I list RBD images in the Ceph dashboard (Block -> Images), the image
list is too slow to load. How can I make it faster?
I am using Ceph Reef version 18.2.1.
Thanks to the community.
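For comparison, the CLI can list image names without opening each image, which is usually much faster than the dashboard's detailed view (the pool name is a placeholder):

```shell
# Plain name listing: fast, no per-image metadata lookups
rbd ls mypool
# Long listing opens every image for size/parent info,
# which is similar in cost to what the dashboard does
rbd ls -l mypool
```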
*Tran Thanh Phong*
Email: tranphong...@gmail.com
Skype: tranphong079
Dear Kai Stian Olstad,
Thank you for your information. It's good knowledge for me.
On Thu, Dec 28, 2023 at 15:06, Kai Stian Olstad <
ceph+l...@olstad.com> wrote:
> On 27.12.2023 04:54, Phong Tran Thanh wrote:
> > Thank you for your knowledge. I have a que
On Tue, Dec 26, 2023 at 08:45, Phong Tran Thanh <
> tranphong...@gmail.com> wrote:
> >
> > Hi community,
> >
> > I am running ceph with block rbd with 6 nodes, erasure code 4+2 with
> > min_size of pool is 4.
> >
> > When three osd is down, and an PG
Hi community,
I am running Ceph with RBD block storage on 6 nodes, erasure code 4+2 with
pool min_size 4.
When three OSDs are down and a PG is in the down state, some pools can't
write data. Suppose the three OSDs can't start and the PG is stuck down;
how can I delete or recreate the PG to replace the down PG o
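If the OSDs holding the PG are permanently lost and the data on it is accepted as gone, one recovery path is to recreate the PG empty (a sketch; 2.1f is a placeholder PG id, and this destroys whatever data the PG held):

```shell
# Confirm the PG is down and which OSDs it is waiting for
ceph pg 2.1f query
# DANGER: recreates the PG empty; only for OSDs that will never return
ceph osd force-create-pg 2.1f --yes-i-really-mean-it
```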
It works!!!
Thanks, Kai Stian Olstad.
On Fri, Dec 1, 2023 at 17:06, Kai Stian Olstad <
ceph+l...@olstad.com> wrote:
> On Fri, Dec 01, 2023 at 04:33:20PM +0700, Phong Tran Thanh wrote:
> > I have a problem with my osd, i want to show dump_historic_ops of osd
Hi community,
I have a problem with my OSD: I want to show dump_historic_ops for an OSD.
I followed this guide:
https://www.ibm.com/docs/en/storage-fusion/2.6?topic=alerts-cephosdslowops
But when I run the command
ceph daemon osd.8 dump_historic_ops
it shows an error, even though the command is run on the node with osd.8.
Can't ge
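The resolution is not fully preserved in this digest, but two common fixes for this error (assuming a cephadm deployment) are to run the command inside the daemon's container, where the admin socket lives, or to query the OSD over the network instead:

```shell
# ceph daemon needs the OSD's admin socket on the local filesystem;
# under cephadm the socket is inside the container, so enter it first:
cephadm shell --name osd.8 -- ceph daemon osd.8 dump_historic_ops
# Alternatively, ask the OSD over the network from any admin node:
ceph tell osd.8 dump_historic_ops
```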