> >>> Package based Ceph deployments, while popular, are not a good choice
> >>> in general. The very simple reason is that it makes upgrades more
> >>> dangerous: you can unintentionally upgrade services in the wrong
> >>> order due to failovers.
> >>
> >> Or when a node crashes / reboots during an upgrade. This has happened
> >> to me.
> >>
> >>
> >
> > Hmmm, it is not really nice to read that Ceph is so picky that
> > everything can go wrong with just a minor update on a single node.
> 
> Then if you really want to stick with package installs, deploy every
> service on a separate node. Your DC runneth over.
> 
> > I am not sure if that is a good direction for development.
> 
> Hence the increasing motivations for container installations, which are
> by nature immune from this dynamic.
> 

I don't know about that; you just move the issue from the Ceph daemons to the 
container runtime. If I remember correctly, I even read something here on the 
list about podman version problems there. 
And since a lot still goes via host configs rather than volumes assigned to the 
container, mismatches between the host and the container image could also 
complicate things. Didn't I read something recently about lvm.conf or so?
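To illustrate the kind of host coupling I mean, here is a rough sketch of a 
cephadm-style container invocation (image tag and mounted paths are 
illustrative assumptions, not taken from any actual deployment):

```shell
# Illustrative sketch only: even a containerized Ceph daemon bind-mounts
# host state, so host-side files like /etc/ceph and the host's device/LVM
# configuration (e.g. lvm.conf) still affect what runs inside the container.
podman run --rm \
  -v /etc/ceph:/etc/ceph:z \
  -v /dev:/dev \
  quay.io/ceph/ceph:v17 \
  ceph --version
```

So the container image pins the Ceph binaries, but not the host configuration 
they read at runtime, which is where upgrade surprises can still creep in.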


> > I would expect ceph to be more robust.
> 
> It's not a function of Ceph, but robustness is exactly the goal.

So what, then, is all this nonsense about a minor upgrade on a node? I have 
never seen any issues with apache httpd, mariadb, postgres etc. Nor with Ceph, 
for that matter.

_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
