[ceph-users] Re: recommendation for barebones server with 8-12 direct attach NVMe?

2024-01-13 Thread Mike O'Connor
On 14/1/2024 1:57 pm, Anthony D'Atri wrote: The OP is asking about new servers I think. I was looking at his statement below relating to using hardware laying around, just putting out there some options which worked for us. So we were going to replace a Ceph cluster with some hardware we had

[ceph-users] Re: recommendation for barebones server with 8-12 direct attach NVMe?

2024-01-13 Thread Mike O'Connor
, E1.s, or E3.s? On Jan 13, 2024, at 5:10 AM, Mike O'Connor wrote: On 13/1/2024 1:02 am, Drew Weaver wrote: Hello, So we were going to replace a Ceph cluster with some hardware we had laying around using SATA HBAs but I was told that the only right way to build Ceph in 2023 is with direct

[ceph-users] Re: recommendation for barebones server with 8-12 direct attach NVMe?

2024-01-13 Thread Mike O'Connor
On 13/1/2024 1:02 am, Drew Weaver wrote: Hello, So we were going to replace a Ceph cluster with some hardware we had laying around using SATA HBAs but I was told that the only right way to build Ceph in 2023 is with direct attach NVMe. Does anyone have any recommendation for a 1U barebones

[ceph-users] Re: CEPH Cluster Backup - Options on my solution

2019-08-17 Thread Mike O'Connor
> [SNIP script]
>
> Hi Mike
>
> When looking for backup solutions, did you come across benji [1][2]
> and the original backy2 [3][4] solutions?
> I have been running benji for a while now, and it seems solid. I use a
> second cluster as storage, but it does support S3 and encryption as well.
>

[ceph-users] CEPH Cluster Backup - Options on my solution

2019-08-16 Thread Mike O'Connor
I tested all had a limit of 32 bits when printing out the size in bytes.

Cheers
Mike

#!/bin/bash
###
# Original Author: Rhian Resnick - Updated a lot by Mike O'Connor
# Purpose: Backup CEPH RBD using snapshots. The files that are created should be stored off the ceph cluster, but you can use ceph
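
For readers skimming the archive, a minimal sketch of the snapshot-then-export approach the script is built around, not the script from the post itself. The pool name "rbd" and the /backup destination are assumptions, and the real script does more (for example the size reporting mentioned above):

#!/bin/bash
# Hypothetical sketch of a snapshot-based RBD backup -- not the original script.
# Assumed pool name and destination directory; adjust for your cluster.
POOL=rbd
DEST=/backup
DATE=$(date +%Y%m%d)

for IMAGE in $(rbd ls "$POOL"); do
    SNAP="${IMAGE}@backup-${DATE}"
    # Take a point-in-time snapshot of the image.
    rbd snap create "${POOL}/${SNAP}"
    # Export the snapshot to a file that can be moved off the Ceph cluster.
    rbd export "${POOL}/${SNAP}" "${DEST}/${IMAGE}-${DATE}.img"
    # Remove the snapshot once the export has completed.
    rbd snap rm "${POOL}/${SNAP}"
done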

[ceph-users] Re: Mapped rbd is very slow

2019-08-16 Thread Mike O'Connor
This probably muddies the water.

Note: active cluster with around 22 read/write IOPS and 200 kB read/write.
A CephFS mount, with 3 hosts, 6 OSDs per host, and 8G public and 10G private networking for Ceph.
No SSDs, mostly WD Red 1T 2.5" drives; some are HGST 1T 7200.

root@blade7:~# fio
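
The fio invocation itself is truncated above. As an illustration only, a small random-write job against a file on a CephFS mount could look like the following; the mount point, file name, and job parameters are assumed, not taken from the post:

# Hypothetical fio job -- the actual command from the post is cut off above.
fio --name=cephfs-randwrite \
    --filename=/mnt/cephfs/fio-test \
    --size=1G --bs=4k --rw=randwrite \
    --ioengine=libaio --direct=1 --iodepth=32 \
    --runtime=60 --time_based --group_reporting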