___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hi,
On Wed, 8 Nov 2023, Sascha Lucas wrote:
On Tue, 7 Nov 2023, Harry G Coin wrote:
File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 482, in is_partition
/usr/bin/docker: stderr     return self.blkid_api['TYPE'] == 'part'
/usr/bin/docker: stderr KeyError
c7a15e" PTTYPE="dos"
Maybe this indicates why the key is missing?
Please tell me if there is anything I can do to find the root cause.
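Reading the traceback above: `is_partition` indexes `blkid_api['TYPE']` unconditionally, but blkid here reports only `PTTYPE="dos"` (a whole disk carrying a partition table, not a partition), so the `TYPE` key is absent and the lookup raises KeyError. A minimal sketch of a defensive lookup (the function name mirrors the traceback; this is an illustration, not the actual ceph-volume patch):

```python
def is_partition(blkid_api: dict) -> bool:
    """Return True if the blkid-derived metadata marks the device as a partition.

    A whole disk with a partition table exposes only PTTYPE (e.g. "dos"),
    not TYPE, so dict.get() avoids the KeyError seen in the traceback.
    """
    return blkid_api.get('TYPE') == 'part'

# Whole disk with a DOS partition table: TYPE absent, no KeyError raised.
print(is_partition({'PTTYPE': 'dos'}))   # False
# An actual partition:
print(is_partition({'TYPE': 'part'}))    # True
```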
Thanks, Sascha.
Hi Venky,
On Wed, 14 Dec 2022, Venky Shankar wrote:
On Tue, Dec 13, 2022 at 6:43 PM Sascha Lucas wrote:
Just an update: "scrub / recursive,repair" does not uncover additional
errors, but it also does not fix the single dirfrag error.
File system scrub does not clear entries from
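For reference, the recursive repair scrub discussed above is started through the MDS admin interface; a sketch, assuming a file system named `cephfs` with a single active rank 0 (substitute your own fs name and rank):

```shell
# start a recursive, repairing scrub at the filesystem root
ceph tell mds.cephfs:0 scrub start / recursive,repair
# check progress
ceph tell mds.cephfs:0 scrub status
```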
Hi William,
On Mon, 12 Dec 2022, William Edwards wrote:
On 12 Dec 2022 at 22:47, Sascha Lucas wrote the following:
Ceph "servers" like MONs, OSDs, MDSs etc. are all
17.2.5/cephadm/podman. The filesystem kernel clients are co-located on
the same hosts running th
Hi,
On Mon, 12 Dec 2022, Sascha Lucas wrote:
On Mon, 12 Dec 2022, Gregory Farnum wrote:
Yes, we’d very much like to understand this. What versions of the server
and kernel client are you using? What platform stack — I see it looks like
you are using CephFS through the volumes interface
Hi Greg,
On Mon, 12 Dec 2022, Gregory Farnum wrote:
On Mon, Dec 12, 2022 at 12:10 PM Sascha Lucas wrote:
A follow-up of [2] also mentioned having random meta-data corruption: "We
have 4 clusters (all running same version) and have experienced meta-data
corruption on the majority of
ugs. The latter would be
worth fixing. Is there a way to find the root cause?
And is going through [1] really the only option? It sounds like being offline
for days...
At least I know now, what dirfrags[4] are.
Thanks, Sascha.
[1] https://docs.ceph.com/en/latest/cephfs/disaster-recovery-expert
ction?
Can the file-system stay mounted/used by clients? How long will it take
for 340T? What is a dir_frag damage?
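On the dir_frag question: a dir_frag damage entry refers to a damaged directory fragment (a shard of a directory's on-disk metadata). The MDS damage table can be inspected via its admin interface; a sketch, assuming rank 0 of a file system named `cephfs`:

```shell
# list recorded metadata damage entries (type, ino, frag, path)
ceph tell mds.cephfs:0 damage ls
```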
TIA, Sascha.
Hi Christian,
On 01.03.2022 at 09:01, Christian Rohmann wrote:
On 28/02/2022 20:54, Sascha Vogt wrote:
Is there a way to clear the error counter on pacific? If so, how?
No, not anymore. See https://tracker.ceph.com/issues/54182
Thanks for the link. Restarting the OSD seems to clear
command
found" response.
Is there a way to clear the error counter on pacific? If so, how?
Greetings
-Sascha-
Hello,
Are all your pools running replica > 1?
Also, having 4 monitors is pretty bad for split-brain situations.
Zach Heise (SSCC) wrote on Wed, Feb 9, 2022, 22:02:
> Hello,
>
> ceph health detail says my 5-node cluster is healthy, yet when I ran
> ceph orch upgrade start --ceph-version 16.2.7
Hey Marc,
Some more information went to the "Ceph Performance very bad even in
Memory?!" topic.
Greetings
On Mon, Feb 7, 2022 at 11:48 AM Marc wrote:
>
> >
> > I gave up on this topic; Ceph does not properly support it, even though
> > it seems really promising.
> >
> > Tested a ping on
> I'm also going to try rdma mode now, but haven't found any more info.
>
> sascha a. wrote on Tue, Feb 1, 2022, 20:31:
>
>> Hey,
>>
>> I recently found this RDMA feature of Ceph, which I'm currently trying
>> out.
>>
>> #rdma dev
>> 0: mlx4_0: node_type ca
Hey,
I recently found this RDMA feature of Ceph, which I'm currently trying out.
#rdma dev
0: mlx4_0: node_type ca fw 2.42.5000 node_guid 0010:e000:0189:1984
sys_image_guid 0010:e000:0189:1987
rdma_server and rdma_ping works as well as "udaddy".
Stopped one of my osds, added following lines to
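The lines themselves are cut off above; for reference, enabling the RDMA messenger is usually done with settings along these lines in ceph.conf (an assumption on my part since the snippet is truncated, and option availability varies by release):

```ini
[global]
# switch the async messenger to the RDMA backend
ms_type = async+rdma
# RDMA device as reported by `rdma dev` above
ms_async_rdma_device_name = mlx4_0
```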
t; them don't provide the durability and consistency guarantees you'd
> expect under a lot of failure scenarios.
> -Greg
>
>
> On Sat, Jan 29, 2022 at 8:42 PM sascha a. wrote:
> >
> > Hello,
> >
> > Im currently in progress of setting up a production ceph c
Hey,
SDS is not just about performance. You want something reliable for the next
> 10(?) years, the more data you have the more this is going to be an issue.
> For me it is important that organisations like CERN and NASA are using it.
> If you look at this incident with the 'bug of the year' then
Hey Vitalif,
I found your wiki as well as your own software before. Pretty impressive
and I love your work!
I especially like your "Theoretical Maximum Random Access Performance"
-Section.
That is exactly what I would expect about cephs performance as well (which
is by design very close to your
rking on optimizing this osd code.
>
I also saw that they are working on Seastar, and on top of that I saw
benchmarks of it performing as badly as BlueStore.
Sadly, exactly what I was expecting.
The only way to get this tuned is to invest plenty more time into it.
On Sun, Jan 30, 2022
Hello Marc,
Thanks for your response. I wrote this email early in the morning, having
spent the whole night and the last two weeks benchmarking Ceph.
The main reason I'm spending days on it is that I have poor performance
with about 25 NVMe disks, and I went a long, long road with hundreds of
Hello,
I'm currently in the process of setting up a production Ceph cluster on a 40
Gbit network (both the internal and public networks are 40 Gb).
Did a lot of machine/Linux tweaking already:
- cpupower state disable
- lowlatency kernel
- kernel tweaks
- rx buffer optimization
- affinity mappings
-