On 14/1/2024 1:57 pm, Anthony D'Atri wrote:
The OP is asking about new servers I think.
I was looking at his statement below relating to using hardware he had laying
around, just putting out there some options which worked for us.
On Jan 13, 2024, at 5:10 AM, Mike O'Connor wrote:
On 13/1/2024 1:02 am, Drew Weaver wrote:
Hello,
So we were going to replace a Ceph cluster with some hardware we had laying
around using SATA HBAs but I was told that the only right way to build Ceph in
2023 is with direct attach NVMe.
Does anyone have any recommendation for a 1U barebones
> [SNIP script]
>
> Hi Mike
>
> When looking for backup solutions, did you come across benji [1][2]
> and the original backy2 [3][4] solutions?
> I have been running benji for a while now, and it seems solid. I use a
> second cluster as storage, but it does support S3 and encryption as well.
>
>
I tested them; all had a 32-bit limit when printing out the size in bytes.
Cheers
Mike
#!/bin/bash
###
# Original Author: Rhian Resnick - Updated a lot by Mike O'Connor
# Purpose: Backup CEPH RBD using snapshots, the files that are created
# should be stored off the ceph cluster, but you can use ceph
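The rest of the script got cut off above. Just as a rough sketch of the
snapshot/export-diff pattern it is built around (pool, image and backup
path below are made-up placeholders, not what my script actually uses):

#!/bin/bash
# Hedged sketch only - names and paths are illustrative placeholders.
POOL=rbd
IMAGE=vm-disk-1
BACKUP_DIR=/mnt/backup        # should live off the Ceph cluster
TODAY=$(date +%Y%m%d)
YESTERDAY=$(date -d yesterday +%Y%m%d)

# Take today's snapshot of the image
rbd snap create ${POOL}/${IMAGE}@${TODAY}

if rbd snap ls ${POOL}/${IMAGE} | grep -q ${YESTERDAY}; then
    # Incremental: export only the changes since yesterday's snapshot
    rbd export-diff --from-snap ${YESTERDAY} \
        ${POOL}/${IMAGE}@${TODAY} ${BACKUP_DIR}/${IMAGE}-${TODAY}.diff
else
    # No previous snapshot found: take a full export instead
    rbd export ${POOL}/${IMAGE}@${TODAY} ${BACKUP_DIR}/${IMAGE}-${TODAY}.full
fi

# Drop the old snapshot once the new one exists
rbd snap rm ${POOL}/${IMAGE}@${YESTERDAY} 2>/dev/null || true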
This probably muddies the water. Note: active cluster with around 22
read/write IOPS and 200 kB read/write.
A CephFS mount on a cluster of 3 hosts, 6 OSDs per host, with 8G public
and 10G private networking for Ceph.
No SSDs; mostly WD Red 1TB 2.5" drives, some are HGST 1TB 7200 rpm.
root@blade7:~# fio
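(The rest of the fio command line got truncated above. Purely as an
illustration, a small mixed random read/write test against a CephFS
mount could look something like the following; the mount point, job
name and sizes are placeholders, not the options I actually ran.)

fio --name=cephfs-test --directory=/mnt/cephfs \
    --rw=randrw --rwmixread=70 --bs=4k --size=1G \
    --ioengine=libaio --direct=1 --iodepth=16 \
    --numjobs=4 --runtime=60 --time_based --group_reporting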