On Thu, Mar 21, 2024 at 11:20:44AM +0100, Malte Stroem wrote:
> Hello Robin,
>
> thanks a lot.
>
> Yes, I set debug to debug_rgw=20 & debug_ms=1.
>
> It's that 403 I always get.
>
> There is no versioning enabled.
>
> There is a lifecycle policy for removing the files after one day.
Did the
Hi,
Today we decided to upgrade from 18.2.0 to 18.2.2. No real hope of a
direct impact (nothing in the change log related to anything similar),
but at least all daemons were restarted, so we thought that maybe this
would clear the problem at least temporarily. Unfortunately it has not
been
I think this is fantastic. Looking forward to the sambaXP talk too!
CephFS + SMB is something we rely on heavily and have had a lot of
success with. It is nice to see it getting some more integration.
Regards,
Bailey
> -----Original Message-----
> From: John Mulligan
>
Hello Ceph List,
I'd like to formally let the wider community know of some work I've been
involved with for a while now: adding Managed SMB Protocol Support to Ceph.
SMB is the well-known network file protocol native to Windows systems and
supported by macOS (and Linux). The other key word
On Thursday, March 21, 2024 11:43:19 AM EDT Daniel Brown wrote:
> Assuming I need admin approval to report this on tracker, how long does it
> take to get approved? Signed up a couple of days ago, but still seeing “Your
> account was created and is now pending administrator approval.”
That's
Assuming I need admin approval to report this on tracker, how long does it take
to get approved? Signed up a couple of days ago, but still seeing “Your account
was created and is now pending administrator approval.”
> On Mar 19, 2024, at 7:51 AM, John Mulligan
> wrote:
>
> On Tuesday,
This is not much to work on, to be honest. Have you tried any of the
suggested debugging steps and checked existing threads?
Quoting faicker mo:
Hi, this is the debug log,
2024-03-13T11:14:28.087+0800 7f6984a95640 4 mon.memb4@3(probing) e6
probe_timeout 0x5650c2b0c3a0
Hi,
before getting into that the first thing I would do is to fail the
mgr. There have been too many issues where failing over the mgr
resolved many of them.
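For what it's worth, failing over the mgr is a one-liner; a sketch, assuming an admin keyring on the host you run it from:

```shell
# Fail the currently active mgr; a standby takes over.
ceph mgr fail
# Verify which mgr is active now.
ceph -s
```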
If that doesn't help, the cephadm.log should show something useful
(/var/log/ceph/cephadm.log on the OSD hosts, I'm still not too
>
> Hi,
>
> On 3/21/24 14:50, Michael Worsham wrote:
> >
> > Now that Reef v18.2.2 has come out, is there a set of instructions on
> > how to upgrade to the latest version using Cephadm?
>
> Yes, there is: https://docs.ceph.com/en/reef/cephadm/upgrade/
>
Just a note on that docs section, it
Hi,
On 3/21/24 14:50, Michael Worsham wrote:
Now that Reef v18.2.2 has come out, is there a set of instructions on how to
upgrade to the latest version using Cephadm?
Yes, there is: https://docs.ceph.com/en/reef/cephadm/upgrade/
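For anyone skimming, the documented cephadm flow boils down to something like this (a sketch, assuming a healthy containerized cluster; see the linked doc for staggered-upgrade options):

```shell
# Confirm the cluster is healthy before upgrading.
ceph -s
# Kick off the orchestrated upgrade to the target release.
ceph orch upgrade start --ceph-version 18.2.2
# Monitor progress while daemons are redeployed in order.
ceph orch upgrade status
```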
Regards
--
Robert Sander
Heinlein Consulting GmbH
I originally built my sandbox Ceph cluster (Reef v18.2.1) using Cephadm and
Ansible. It's stable and works fine.
Now that Reef v18.2.2 has come out, is there a set of instructions on how to
upgrade to the latest version using Cephadm?
-- Michael
Hi,
I have the same issue.
Deep scrub hasn't finished on some PGs.
Using Ceph 18.2.2.
The initially installed version was 18.0.0.
In the logs I see a lot of scrub/deep-scrub starts:
Mar 21 14:21:09 ceph-node10 ceph-osd[3804193]: log_channel(cluster) log
[DBG] : 13.b deep-scrub starts
Mar
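To see which PGs are overdue and to nudge one along, something like this can help (a sketch; 13.b is the PG from the log above):

```shell
# Dump per-PG stats, including last scrub / deep-scrub timestamps.
ceph pg dump pgs
# Manually trigger a deep scrub on the affected PG.
ceph pg deep-scrub 13.b
```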
Hello Robin,
thanks a lot.
Yes, I set debug to debug_rgw=20 & debug_ms=1.
It's that 403 I always get.
There is no versioning enabled.
There is a lifecycle policy for removing the files after one day.
That's all I can find.
Do you have any more ideas?
Best,
Malte
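For reference, a lifecycle rule that expires objects after one day looks roughly like this; a sketch of the standard S3 LifecycleConfiguration shape, not the actual policy in use here (the rule ID and empty prefix are placeholders):

```json
{
  "Rules": [
    {
      "ID": "expire-after-one-day",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 1 }
    }
  ]
}
```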
On 19.03.24 17:23, Robin
Hi Robert,
One of the theoretically possible (but not implemented in Ceph)
benefits of not crashing would be that an OSD could request the
errored piece of data from other OSDs and rewrite the data on the disk
in place. When a defective sector is rewritten, most disks and SSDs
mark the original