On 2025-07-09 23:12, Özkan Göksu wrote:
> Hello Burkhard.
> 
> Yes you are right indeed.
> 
> Currently I'm using an Arch Linux-based custom development OS, Nautilus.
> For Reef, I would have to develop a new custom OS from scratch and revisit
> all of my dependencies.
> I had too many issues with my custom Arch Linux OS before and had to spend
> too much time fixing them. I won't do that again.
> 
> I decided to use Ubuntu 22.04 with Reef. I've done customizations and
> tunings based on my dependencies, hardware and use case.

Are there specific reasons you are targeting Reef? It will reach EOL shortly.

I would recommend Squid (and Ubuntu 24.04).

Or, if you have to stay on Ubuntu 22.04, go with Quincy (Quincy will be
supported by Canonical, as it is part of the 22.04 LTS).
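For reference, the releases mentioned above follow Ceph's major version
numbering (Quincy = 17, Reef = 18, Squid = 19), so you can tell which release
a cluster is on from the version string reported by `ceph --version`. A
minimal sketch (the `ceph_codename` helper name is hypothetical):

```shell
#!/bin/sh
# Map a Ceph version string to its release codename.
# Mapping per Ceph's release numbering: 17 = Quincy, 18 = Reef, 19 = Squid.
ceph_codename() {
  case "${1%%.*}" in
    17) echo quincy ;;
    18) echo reef ;;
    19) echo squid ;;
    *)  echo unknown ;;
  esac
}

# Example: pass the version from `ceph --version`, e.g. "18.2.4".
ceph_codename "18.2.4"
```

This only parses the leading major version; it does not query the cluster
itself.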

cheers,
peter.

> Ceph OS recommendations:
> https://docs.ceph.com/en/reef/start/os-recommendations/
> " A: Ceph provides packages and has done comprehensive tests on the
> software in them."
> 
> - Best.
> 
> 
> Burkhard Linke <burkhard.li...@computational.bio.uni-giessen.de> wrote on
> Wed, 9 Jul 2025 at 23:42:
> 
>> Hi,
>>
>> On 09.07.25 21:21, Özkan Göksu wrote:
>>> Hello Wesley.
>>>
>>> Thank you for the warning. I'm aware of this; even with the recommended
>>> upgrade path, it is not easy or safe for complicated clusters like mine. I
>>> have billions of small S3 objects, versions, indexes, etc.
>>> With each new Ceph release, the RADOS, DB, OSD, and PG components have
>>> started using new schemas, attributes, etc. to store data and indexes.
>>>
>>> Instead of upgrading, I'm going to export all the data as RAW, destroy the
>>> clusters, wipe the drives and start from scratch.
>>>
>>> This is the safe and fast upgrade method for me.
>>
>> One additional blocker is the choice of OS on the machines. Certain
>> Ceph releases might only be available for certain distribution versions,
>> so if you decide to actually upgrade the existing cluster you will
>> probably also have to upgrade the OS.
>>
>> Starting from scratch might be the faster solution, if you can afford to
>> dump all data to a temporary location.
>>
>> Best regards,
>>
>> Burkhard
>>
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
