you can use upgrade to make sure every daemon
is on the same image.
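Roughly, that boils down to something like the following (the target version and
image used here are only placeholders):

  # see what every daemon is currently running
  ceph orch ps
  ceph versions

  # move everything onto one image
  ceph orch upgrade start --ceph-version 16.2.7
  # or: ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.7

  # watch progress
  ceph orch upgrade status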
- Adam King
On Fri, Feb 25, 2022 at 10:06 AM Kai Börnert wrote:
Hi,
what would be the correct way to move forward?
I have a 3-node cephadm-installed cluster, one node died, the other two
are fine and
Hi,
what would be the correct way to move forward?
I have a 3-node cephadm-installed cluster, one node died, the other two
are fine and work as expected, so no data loss, but a lot of
remapped/degraded.
The dead node was replaced and I wanted to add it to the cluster using
"ceph orch host
Hi,
to have a fair test you need to replicate the power-loss scenarios ceph
does cover, which you currently are not:
no memory caches in the OS or on the disk are allowed to be used; ceph
has to ensure that an object written is actually written, even if a node
of your cluster explodes right at
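As an illustration of taking the volatile caches out of such a test (the device
name is a placeholder):

  hdparm -W 0 /dev/sdX                  # disable the drive's own write cache
  echo 3 > /proc/sys/vm/drop_caches     # drop the OS page cache before measuring
  # and run the benchmark itself with O_DIRECT/fsync (e.g. fio --direct=1 --sync=1)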
Hi,
do you use more nodes than deployed mgrs and cephadm?
If so, it might be that the node you are connecting to no longer has an
instance of the mgr running, and you are only getting some leftovers from
the browser cache?
At least this was happening in my test cluster, but I was always able to
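A quick way to check where the mgr/dashboard actually lives at the moment:

  ceph mgr stat                      # which mgr is active right now
  ceph orch ps --daemon-type mgr     # where mgr daemons are deployed
  ceph mgr services                  # the URL the dashboard is currently served from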
Hi,
are any of those old disks SMR ones? Because they will absolutely
destroy any kind of performance (ceph does not use write caches due to
power-loss concerns, so they kinda do their whole magic for each
write request).
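A quick way to check for that (the device name is a placeholder; the ZONED
column needs a reasonably new util-linux):

  cat /sys/block/sda/queue/zoned     # "none" = conventional, anything else is SMR
  lsblk -o NAME,MODEL,ROTA,ZONED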
Greetings
On 9/6/21 10:47 AM, Simon Sutter wrote:
Hello everyone! I
As far as I understand, the more important factor (for the SSDs) is whether they
have power-loss protection (so they can use their on-device write cache)
and how many iops they have when using direct writes with queue depth 1.
I just did a test for an HDD-with-block.db-on-SSD cluster using extra
cheap
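A rough sketch of such a test with fio, exercising exactly that qd1 direct/sync
write path (the device name is a placeholder; this writes to the raw device, so
only run it against an empty disk):

  fio --name=qd1 --filename=/dev/sdX --rw=randwrite --bs=4k \
      --direct=1 --sync=1 --iodepth=1 --numjobs=1 \
      --runtime=60 --time_based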
If you want to go cheap and somewhat questionable,
there are some ASRock mainboards with a soldered-in Atom CPU that
support up to 32 GB memory (officially only 8, but the controller does
more) and have 2 SATA ports directly + a free x16 PCIe port. Those boards are
usually less than 90€, not as
I think the primary goal of container environments is resource isolation. At
least when I read about the history, I never read anything about them being a
tool for people to skip learning something.
Containers allow using mixed versions of the same dependency, despite it
being a shared dependency, doing
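For example, two images with different Ceph versions can run their binaries side
by side on one host, each with its own set of libraries (the tags here are just
examples):

  docker run --rm --entrypoint /usr/bin/ceph quay.io/ceph/ceph:v16.2.7 --version
  docker run --rm --entrypoint /usr/bin/ceph quay.io/ceph/ceph:v17.2.0 --version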
Because all of this reads way too negative regarding containers to me, I
wanted to give a different perspective.
Coming from a day-to-day job that heavily utilizes Kubernetes for its
normal environment, I found cephadm quite a godsend;
instead of having to deal with a lot of pesky
10:42, Kai Börnert wrote:
Hi, thanks for your explanation.
I see the 16.2.3 build is published, but (of course) I cannot update to
it, as the version I currently use still has this bug.
Is there some workaround/hack I can use to upgrade?
I had success with stopping the "looping
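The relevant knobs for that kind of workaround would be along these lines (a
sketch only, not necessarily the exact steps from this thread):

  ceph orch upgrade pause            # stop cephadm from cycling the mgr
  ceph orch ps --daemon-type mgr     # see which mgr is flapping
  ceph orch upgrade resume           # continue once the mgr is up on the new image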
missed one case.)
sage
On Thu, May 6, 2021 at 9:59 AM Kai Börnert wrote:
Hi all,
upon updating to 16.2.2 via cephadm, the upgrade gets stuck on the
first mgr.
Looking into this via docker logs, I see that it is still loading modules
when it is apparently terminated and restarted in a loop
Hi all,
upon updating to 16.2.2 via cephadm, the upgrade gets stuck on the
first mgr.
Looking into this via docker logs, I see that it is still loading modules
when it is apparently terminated and restarted in a loop.
When pausing the update, the mgr succeeds in starting with the new
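Ways to look at the stuck mgr from the host side (the daemon name is a
placeholder, taken from the output of cephadm ls):

  ceph -W cephadm                        # live log of what the orchestrator is doing
  cephadm ls | grep mgr                  # find the exact daemon name
  cephadm logs --name mgr.node1.abcdef   # container logs for that mgr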
Hi,
I'm currently testing some disaster scenarios.
When removing one osd/monitor host, I see that a new quorum is built
without the missing host. The missing host is listed in the dashboard
under "Not In Quorum", so probably everything is as expected.
After restarting the host, I see that the
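The same information is also available from the CLI, independent of the dashboard:

  ceph mon stat                       # which mons exist and who is in quorum
  ceph quorum_status -f json-pretty   # full quorum details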
Hi,
I hope this mailing list is ok for this kind of question; if not, please
ignore.
I'm currently in the process of planning a smaller ceph cluster mostly
for cephfs use.
The budget still allows for some SSDs in addition to the required
hard disks.
I see two options for how to use those,
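One common layout in that situation, as a sketch only (device selection by
rotational flag; the file name and service id are made up):

  # osd_spec.yml
  service_type: osd
  service_id: hdd_data_ssd_db
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0

  # apply it with:
  ceph orch apply osd -i osd_spec.yml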