[ceph-users] Re: Ceph newbee questions

2023-12-22 Thread Anthony D'Atri
>> You can do that for a PoC, but that's a bad idea for any production workload. You'd want at least three nodes with OSDs to use the default RF=3 replication. You can do RF=2, but at the peril of your mortal data.
>
> I'm not sure I agree - I think size=2, min_size=2 is no worse …
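
For reference, the replication settings under discussion are per-pool; a quick sketch of checking and setting them (the pool name is a placeholder):

    # inspect current replication settings
    ceph osd pool get <pool-name> size
    ceph osd pool get <pool-name> min_size
    # the usual defaults: three copies, writes acknowledged once two are durable
    ceph osd pool set <pool-name> size 3
    ceph osd pool set <pool-name> min_size 2

With size=2, min_size=2 the pool stops accepting writes as soon as either replica is unavailable, which is the trade-off being debated here.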

[ceph-users] Re: Ceph newbee questions

2023-12-22 Thread Rich Freeman
Disclaimer: I'm fairly new to Ceph, but I've read a bunch of threads on the min_size=1 issue, which perplexed me when I started, since one replica is generally considered fine in many other applications. However, there really are some unique concerns with Ceph beyond just the number of disks you …

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-22 Thread Johan Hattne
On 2023-12-22 03:28, Robert Sander wrote:
> Hi,
> On 22.12.23 11:41, Albert Shih wrote:
>> for n in 1-100
>>     take the OSDs on server n offline
>>     uninstall docker on server n
>>     install podman on server n
>>     redeploy on server n
>> end
> Yep, that's basically the procedure. But first try it on a test …

[ceph-users] Re: Ceph newbee questions

2023-12-22 Thread Anthony D'Atri
> I have manually configured a ceph cluster with ceph fs on debian bookworm.

Bookworm support is very, very recent I think.

> What is the difference from installing with cephadm compared to manual install, any benefits that you miss with manual install?

A manual install is dramatically …
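
For comparison, a minimal cephadm-based deployment is only a handful of commands (hostnames and the monitor IP below are placeholders; exact flags vary by release):

    # bootstrap the first monitor/manager
    cephadm bootstrap --mon-ip <mon-ip>
    # add the remaining hosts
    ceph orch host add <hostname>
    # let the orchestrator create OSDs on all unused devices
    ceph orch apply osd --all-available-devices

Everything after that (upgrades, adding daemons, replacing OSDs) then goes through the orchestrator rather than hand-edited configs and unit files.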

[ceph-users] Ceph newbee questions

2023-12-22 Thread Marcus
Hi all, I am completely new to Ceph and I come from Gluster. We have had our eyes on Ceph for several years, and as the Gluster project seems to be slowing down we now think it is time to start looking into Ceph. I have manually configured a Ceph cluster with CephFS on Debian Bookworm. What is the difference …

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-22 Thread Anthony D'Atri
> Sorry I thought of one more thing.
>
> I was actually re-reading the hardware recommendations for Ceph and it seems to imply that both RAID controllers as well as HBAs are bad ideas.

Advice I added most likely ;)  "RAID controllers" *are* a subset of HBAs BTW. The nomenclature can be …

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-22 Thread Drew Weaver
Sorry, I thought of one more thing. I was actually re-reading the hardware recommendations for Ceph and they seem to imply that both RAID controllers and HBAs are bad ideas. I believe I remember that RAID controllers are suboptimal, but I guess I don't understand how you would …

[ceph-users] Re: RGW requests piling up

2023-12-22 Thread Gauvain Pocentek
I'd like to say that it was something smart but it was a bit of luck. I logged in on a hypervisor (we run OSDs and OpenStack hypervisors on the same hosts) to deal with another issue, and while checking the system I noticed that one of the OSDs was using a lot more CPU than the others. It made me

[ceph-users] RGW - user created bucket with name of already created bucket

2023-12-22 Thread Ondřej Kukla
Hello, I would like to share a rather worrying experience I’ve just had on one of my production clusters. A user successfully created a bucket with the name of a bucket that already exists! He is not the bucket owner - the original user is - but he is able to see it when he does a ListBuckets over S3 …
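
To see who RGW actually records as the owner, and what each user's listing contains, something like this should help (bucket and user names are placeholders):

    # bucket metadata, including the owner field
    radosgw-admin bucket stats --bucket=<bucket-name>
    # buckets owned by a specific user
    radosgw-admin bucket list --uid=<user-id>

Comparing the owner reported by radosgw-admin with what the second user sees in ListBuckets should narrow down whether this is a metadata inconsistency or just a listing oddity.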

[ceph-users] ceph-dashboard odd behavior when visiting through haproxy

2023-12-22 Thread Demian Romeijn
I'm currently trying to set up the ceph-dashboard using the official documentation on how to do so. I've managed to log in by just visiting the URL and port, and by visiting it through haproxy. However, using haproxy to visit the site results in odd behavior. At my first login, nothing loads on …
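
One common cause with a proxy in front of the dashboard: only the active mgr serves it, and standby mgrs redirect to the active one by default, which confuses load balancers that spread requests across all mgr hosts. A possible mitigation (check the dashboard documentation for your release) is to have standbys return an error so haproxy health checks mark them down:

    ceph config set mgr mgr/dashboard/standby_behaviour "error"
    ceph config set mgr mgr/dashboard/standby_error_status_code 503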

[ceph-users] Re: RGW requests piling up

2023-12-22 Thread Gauvain Pocentek
Hi again, It turns out that our RADOS cluster wasn't that happy: the RGW index pool wasn't able to handle the load. Scaling the PG number helped (256 to 512), and the RGW is back to normal behaviour. There is still a huge number of read IOPS on the index, and we'll try to figure out what's …
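
For reference, the change described above amounts to something like the following; the pool name here is the usual default and may differ per cluster:

    # see what the autoscaler recommends first
    ceph osd pool autoscale-status
    # raise the PG count on the RGW index pool (here 256 -> 512)
    ceph osd pool set default.rgw.buckets.index pg_num 512

On recent releases pgp_num follows pg_num automatically and the data is rebalanced in the background.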

[ceph-users] MDS subtree pinning

2023-12-22 Thread Sake Ceph
Hi! As I'm reading through the documentation about subtree pinning, I was wondering if the following is possible. We've got the following directory structure:

    /
    /app1
    /app2
    /app3
    /app4

Can I pin /app1 to MDS rank 0 and 1, the directory /app2 to rank 2, and finally /app3 and /app4 to …
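
For what it's worth, static subtree pinning is set with an extended attribute on the directory, and a directory can only be statically pinned to one rank at a time; the mount point and the ranks below are arbitrary placeholders:

    # pin /app2 to one rank, /app3 to another (static pinning, one rank per directory)
    setfattr -n ceph.dir.pin -v 2 /mnt/cephfs/app2
    setfattr -n ceph.dir.pin -v 3 /mnt/cephfs/app3
    # -v -1 removes a pin again
    setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/app2

Spreading a single directory such as /app1 over two ranks would need ephemeral distributed pinning (the ceph.dir.pin.distributed attribute), which distributes its immediate children across ranks rather than pinning one directory to two ranks at once.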

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-22 Thread Robert Sander
On 22.12.23 11:46, Marc wrote:
> Does podman still have the issue that docker has, where killing the docker daemon kills all tasks?

Podman does not come with a daemon to start containers. The Ceph orchestrator creates systemd units to start the daemons in podman containers. Regards --
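
Those units are named after the cluster fsid and the daemon, so they can be inspected directly on a host (fsid and daemon id below are placeholders):

    # list the ceph units cephadm created on this host
    systemctl list-units 'ceph-*'
    # status of a single daemon, e.g. an OSD
    systemctl status ceph-<fsid>@osd.<id>.service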

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-22 Thread André Gemünd
Podman doesn't use a daemon, that's one of the basic ideas. We also use it in production btw.

On 22 Dec 2023 at 11:46, Marc m...@f1-outsourcing.eu wrote:
>> It's been claimed to me that almost nobody uses podman in production, but I have no empirical data.
>
> As opposed …

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-22 Thread Robert Sander
Hi,

On 22.12.23 11:41, Albert Shih wrote:
> for n in 1-100
>     take the OSDs on server n offline
>     uninstall docker on server n
>     install podman on server n
>     redeploy on server n
> end

Yep, that's basically the procedure. But first try it on a test cluster. Regards -- Robert Sander Heinlein
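
A rough, untested sketch of that loop with cephadm/orchestrator commands; the package names assume a Debian-style host, and the whole thing assumes you check cluster health between hosts:

    # hypothetical: migrate one host at a time from docker to podman
    for n in $(seq 1 100); do
        host="server${n}"
        ceph orch host maintenance enter "${host}"      # stop the Ceph daemons on that host
        ssh "${host}" "apt-get -y remove docker.io && apt-get -y install podman"
        ceph orch host maintenance exit "${host}"
        # redeploy the host's daemons so cephadm recreates them under podman,
        # e.g. with "ceph orch redeploy" as mentioned elsewhere in the thread
        ceph -s                                         # check cluster health before moving on
    done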

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-22 Thread Marc
>> It's been claimed to me that almost nobody uses podman in production, but I have no empirical data.

As opposed to docker or to having no containers at all?

> I even converted clusters from Docker to podman while they stayed online thanks to "ceph orch redeploy".

Does …

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-22 Thread Eugen Block
That's good to know, I have the same in mind for one of the clusters but haven't had the time to test it yet.

Quoting Robert Sander:
> On 21.12.23 22:27, Anthony D'Atri wrote:
>> It's been claimed to me that almost nobody uses podman in production, but I have no empirical data.
> I even …

[ceph-users] RGW rate-limiting or anti-hammering for (external) auth requests // Anti-DoS measures

2023-12-22 Thread Christian Rohmann
Hey Ceph-Users, RGW does have options [1] to rate limit ops or bandwidth per bucket or user. But those only come into play when the request is authenticated. I'd like to also protect the authentication subsystem from malicious or invalid requests. So in case e.g. some EC2 credentials are not
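
For completeness, the per-user/per-bucket limits referenced as [1] look roughly like this on Quincy and later (values and the uid are placeholders), but as noted they only kick in after the request has been authenticated:

    # define and enable a per-user rate limit
    radosgw-admin ratelimit set --ratelimit-scope=user --uid=<user-id> \
        --max-read-ops=1024 --max-write-ops=256
    radosgw-admin ratelimit enable --ratelimit-scope=user --uid=<user-id>

Unauthenticated or invalid-credential traffic would still have to be throttled in front of RGW, e.g. at the load balancer.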

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-22 Thread Robert Sander
On 21.12.23 22:27, Anthony D'Atri wrote:
> It's been claimed to me that almost nobody uses podman in production, but I have no empirical data.

I even converted clusters from Docker to podman while they stayed online thanks to "ceph orch redeploy". Regards -- Robert Sander Heinlein