I have already installed multiple one-node Ceph clusters with CephFS for
non-productive workloads over the last few years.
I've had no major issues, e.g. once a broken HDD. The question is what kind
of EC or replication you will use. Also, only power off the node in a clean
and healthy state ;-)
What woul
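On the EC-or-replication point: on a single node the failure domain has to
be "osd" rather than "host", so replicas or EC chunks land on different
disks. A rough sketch with the standard ceph CLI (the rule, profile and pool
names here are made up for illustration):

    # replicated pool whose CRUSH rule spreads copies across OSDs, not hosts
    ceph osd crush rule create-replicated rep-osd default osd
    ceph osd pool create backup-rep 64 64 replicated rep-osd

    # or an EC pool (k=4, m=2) with the same osd-level failure domain
    ceph osd erasure-code-profile set ec42-osd k=4 m=2 crush-failure-domain=osd
    ceph osd pool create backup-ec 64 64 erasure ec42-osd

With k=4/m=2 you need at least six OSDs and can lose any two disks.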
On Tue, May 21, 2024 at 08:54:26PM +0000, Eugen Block wrote:
> It’s usually no problem to shut down a cluster. Set at least the noout flag;
> the other flags like norebalance, nobackfill etc. won’t hurt either. Then
> shut down the servers. I do that all the time with test clusters (they do
> have data, just not important at all), and I’ve never had data loss.
It’s usually no problem to shut down a cluster. Set at least the noout
flag; the other flags like norebalance, nobackfill etc. won’t hurt
either. Then shut down the servers. I do that all the time with test
clusters (they do have data, just not important at all), and I’ve
never had data loss.
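For anyone wanting the concrete commands: the flag dance above is a few
standard ceph CLI calls. A minimal sketch (exact health checks are up to
you):

    # before shutdown: keep CRUSH from marking OSDs out and shuffling data
    ceph osd set noout
    ceph osd set norebalance
    ceph osd set nobackfill
    # ...power off, months pass, power back on, wait for all OSDs to come up...
    ceph osd unset nobackfill
    ceph osd unset norebalance
    ceph osd unset noout

Only unset the flags once "ceph -s" shows every OSD up again.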
Thanks guys,
I think I'll just risk it since it's only for backups, then write
something up later as a follow-up on what happens, in case others want to
do something similar. I agree it's not typical; I'm a bit of an odd-duck
data hoarder.
Regards,
Adam
On 5/21/24 14:21, Matt Vandermeulen wrote:
I would normally vouch for ZFS for this sort of thing, but the mix of
drive sizes will be... an inconvenience, at best. You could get
creative with the hierarchy (making raidz{2,3} or mirrors of same-sized
drives, or something), but it would be far from ideal. I use ZFS for my
own home machine.
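For context, the "mirrors of same-sized drives" idea is a pool of mirror
vdevs where each pair matches in size; capacity then adds up across vdevs.
A hypothetical sketch (pool name and device paths are placeholders):

    # each mirror pairs equal-sized disks; mismatched sizes inside a vdev
    # waste the difference, which is why mixed drive sizes are awkward
    zpool create tank \
        mirror /dev/disk/by-id/ata-8TB-disk1 /dev/disk/by-id/ata-8TB-disk2 \
        mirror /dev/disk/by-id/ata-4TB-disk1 /dev/disk/by-id/ata-4TB-disk2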
> It's all non-corporate data, I'm just trying to cut back on wattage
> (removes around 450W of the 2.4kW) by powering down backup servers that
> house 208TB while not being backed up or restoring.
450W for one server seems quite hefty. Under full load? You can also check
your CPU power states and frequencies; that cuts some power as well.
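For example, if the cpupower utility from linux-tools is installed (a
sketch, not a tuning guide):

    cpupower frequency-info              # current driver, governor, freq range
    cpupower frequency-set -g powersave  # favour lower clocks and power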
>
> So that
Hello,
It's all non-corporate data, I'm just trying to cut back on wattage
(removes around 450W of the 2.4kW) by powering down backup servers that
house 208TB while not being backed up or restoring.
ZFS sounds interesting; however, does it play nice with a mix of drive
sizes? That's primarily
> > I think it is his lab so maybe it is a test setup for production.
>
> Home production?
A home setup to test on, before he applies changes to his production.
Cheers 🍷 ;)
> I think it is his lab so maybe it is a test setup for production.
Home production?
>
> I don't think it matters too much with scrubbing; it is not as if it is
> related to how long you were offline. It will scrub just as much being
> 1 month offline as being 6 months offline.
>
>>
>> If you have a single node arguably ZFS would be a better choice.
I think it is his lab so maybe it is a test setup for production.
I don't think it matters too much with scrubbing; it is not as if it is
related to how long you were offline. It will scrub just as much being
1 month offline as being 6 months offline.
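The reason the offline duration doesn't matter: scrubs are scheduled per PG
against fixed intervals, so after any long outage everything is simply
overdue at once. The intervals can be inspected with the standard config
commands (values are in seconds):

    ceph config get osd osd_scrub_max_interval
    ceph config get osd osd_deep_scrub_interval
    ceph -s    # PG states show what is queued for scrubbing after boot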
>
> If you have a single node arguably ZFS would be a better choice.
If you have a single node arguably ZFS would be a better choice.
> On May 21, 2024, at 14:53, adam.ther wrote:
>
> Hello,
>
> To save on power in my home lab, can I have a single-node Ceph cluster sit
> idle and powered off for 3 months at a time, then boot only to refresh
> backups? Or will th