[ceph-users] 14.2.15: Question about the collection_list_legacy osd bug fixed in 14.2.15

2020-11-23 Thread Rainer Krienke
Hello, I am running a production Ceph cluster with Nautilus 14.2.13. All OSDs are bluestore and were created with a Ceph version prior to 14.2.12. What I would like to know is how urgent I should consider the collection_list_legacy bug, since at the moment I am not going to add a brand new OSD
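
For orientation, a minimal sketch of how to confirm which release each daemon is actually running before deciding how urgently to schedule the 14.2.15 update; osd.3 below is a placeholder id.

    ceph versions                             # per-daemon-type version summary
    ceph osd versions                         # just the OSDs
    ceph osd metadata 3 | grep ceph_version   # osd.3 is a placeholder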

[ceph-users] osd crash: Caught signal (Aborted) thread_name:tp_osd_tp

2020-11-23 Thread Milan Kupcevic
Hello, Three OSD daemons crash at the same time while processing the same object located in an rbd ec4+2 pool, leaving a placement group in an inactive, down state. Soon after I start the OSD daemons back up, they crash again, choking on the same object.
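
A hedged sketch of the usual first triage steps on Nautilus: pull the collected crash reports and inspect the down PG. The crash id and PG id are placeholders.

    ceph crash ls
    ceph crash info <crash-id>       # use an id from "ceph crash ls"
    ceph health detail
    ceph pg ls down                  # find the inactive/down PG
    ceph pg 7.1a query | less        # 7.1a is a placeholder PG id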

[ceph-users] Cephfs snapshots and previous version

2020-11-23 Thread Oliver Weinmann
Today I played with a Samba gateway and CephFS. I couldn't get previous versions displayed on a Windows client and found very little info on the net about how to accomplish this. It seems that I need a vfs module called ceph_snapshots. It's not included in the latest Samba version on CentOS 8. by
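
For reference, a minimal smb.conf stanza of the kind vfs_ceph_snapshots expects, assuming a Samba build that actually ships the module (which, as noted, the stock CentOS 8 packages may not); the share name and path are placeholders.

    # Add to /etc/samba/smb.conf (share name and path are placeholders):
    #   [cephfs]
    #       path = /mnt/cephfs/projects
    #       read only = no
    #       vfs objects = ceph_snapshots
    systemctl restart smb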

[ceph-users] v14.2.15 Nautilus released

2020-11-23 Thread David Galloway
This is the 15th backport release in the Nautilus series. This release fixes a ceph-volume regression introduced in v14.2.13 and includes a few other fixes. We recommend that users update to this release. For detailed release notes with links and a changelog, please refer to the official blog entry at
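
A generic, hedged sketch of a package-based rolling update to 14.2.15, one node at a time; adapt the package manager and restart order to your environment.

    ceph osd set noout                    # avoid rebalancing during restarts
    dnf update 'ceph*'                    # or yum/apt, per distribution
    systemctl restart ceph-mon.target     # mons first, then mgrs, then OSDs
    systemctl restart ceph-mgr.target
    systemctl restart ceph-osd.target
    ceph versions                         # confirm everything reports 14.2.15
    ceph osd unset noout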

[ceph-users] Re: Documentation of older Ceph version not accessible anymore on docs.ceph.com

2020-11-23 Thread Dan Mick
I don't know the answer to that. On 11/23/2020 6:59 AM, Martin Palma wrote: Hi Dan, yes I noticed but now only "latest", "octopus" and "nautilus" are offered to be viewed. For older versions I had to go directly to github. Also simply switching the URL from

[ceph-users] Re: Unable to find further optimization, or distribution is already perfect

2020-11-23 Thread Nathan Fish
What does "ceph osd pool autoscale-status" report? On Mon, Nov 23, 2020 at 12:59 PM Toby Darling wrote: > > Hi > > We're having problems getting our erasure coded ec82pool to upmap balance. > "ceph version 14.2.11 (f7fdb2f52131f54b891a2ec99d8205561242cdaf) > nautilus (stable)": 554 > > The pool

[ceph-users] Unable to find further optimization, or distribution is already perfect

2020-11-23 Thread Toby Darling
Hi, we're having problems getting our erasure coded ec82pool to upmap balance. "ceph version 14.2.11 (f7fdb2f52131f54b891a2ec99d8205561242cdaf) nautilus (stable)": 554 The pool consists of 20 nodes in 10 racks, each rack containing a pair of nodes, one with 45x 8TB drives and one with 10x 16TB.
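
A hedged sketch of the usual upmap-balancer prerequisites on 14.2.x; whether they apply here depends on the cluster's client mix.

    ceph features                     # confirm all clients speak >= luminous
    ceph osd set-require-min-compat-client luminous
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status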

[ceph-users] Re: PGs undersized for no reason?

2020-11-23 Thread Frank Schilder
Found it. OSDs came up in the wrong root. -- Frank Schilder, AIT Risø Campus, Bygning 109, rum S14. From: Frank Schilder Sent: 23 November 2020 12:46:32 To: ceph-users@ceph.io Subject: [ceph-users] PGs undersized for no reason? Hi all, I'm
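
A sketch of how that situation is usually spotted and prevented; the bucket and host names below are placeholders.

    ceph osd tree                     # compare the CRUSH tree against expectations
    # Stop OSDs from re-registering their CRUSH location on start, or pin it:
    ceph config set osd osd_crush_update_on_start false
    # or per host in ceph.conf:
    #   [osd]
    #   crush location = root=default host=ceph-node-01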

[ceph-users] PGs undersized for no reason?

2020-11-23 Thread Frank Schilder
Hi all, I'm upgrading Ceph Mimic from 13.2.8 to 13.2.10 and noticed something strange. When restarting OSDs on the new version, the PGs come back as undersized. They are missing 1 OSD and I get a lot of objects degraded/misplaced. I have only the noout flag set. Can anyone help me understand why the PGs
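
A quick sketch of how to see which OSD is missing from the undersized PGs' acting sets; 3.2f is a placeholder PG id.

    ceph health detail
    ceph pg ls undersized
    ceph pg 3.2f query | grep -A5 -e '"up"' -e '"acting"'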

[ceph-users] Re: ssd suggestion

2020-11-23 Thread Anthony D'Atri
Those are QLC, with low durability. They may work okay for your use case if you keep an eye on lifetime, especially if your writes tend to be sequential. Random writes will eat them more quickly, as will of course EC. Remember that recovery and balancing contribute to writes, and ask Micron for the
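
A small illustration of the endurance arithmetic hinted at above, with deliberately hypothetical numbers; the real TBW rating must come from the vendor datasheet.

    # Attribute names vary by vendor/model.
    smartctl -a /dev/sda | grep -i -e wear -e life -e written
    # Hypothetical math: a 3.84 TB drive rated for ~1 PB written over a
    # 5-year warranty allows roughly 1000 TB / (3.84 TB * 1825 days) ≈ 0.14 DWPD.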

[ceph-users] Re: Documentation of older Ceph version not accessible anymore on docs.ceph.com

2020-11-23 Thread Martin Palma
Hi Dan, yes I noticed, but now only "latest", "octopus" and "nautilus" are offered to be viewed. For older versions I had to go directly to GitHub. Also, simply switching the URL from "https://docs.ceph.com/en/nautilus/" to "https://docs.ceph.com/en/luminous/" will not work any more. Is it
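
Until the older branches reappear on docs.ceph.com, one workaround is to read the RST sources straight from a release tag on GitHub; the tag and file below are only examples.

    git clone --branch v12.2.13 --depth 1 https://github.com/ceph/ceph.git
    less ceph/doc/rados/operations/crush-map.rst   # docs live under doc/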

[ceph-users] Re: NoSuchKey on key that is visible in s3 list/radosgw bk

2020-11-23 Thread Denis Krienbühl
Thanks Frédéric, we’ve done that in the meantime to work around issue #47866. The error has been reproduced and there’s a PR associated with the issue: https://tracker.ceph.com/issues/47866 Cheers, Denis. On 23 Nov 2020, at 11:56, Frédéric Nass

[ceph-users] ssd suggestion

2020-11-23 Thread mj
Hi, We are going to replace our spinning SATA 4TB filestore disks with new 4TB SSD bluestore disks. Our cluster reads far more than it writes. Comparing options, I found the interesting and cheap Micron 5210 ION 3.84TB SSDs. The way we understand it, there is a performance hit when it
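
Since the workload is read-heavy, a quick fio check on one candidate drive can show how much the QLC write penalty matters in practice; the device path is a placeholder and the write test destroys data on that device.

    fio --name=randread  --filename=/dev/sdX --rw=randread  --bs=4k \
        --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based
    fio --name=randwrite --filename=/dev/sdX --rw=randwrite --bs=4k \
        --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based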

[ceph-users] Re: OSD Memory usage

2020-11-23 Thread Igor Fedotov
Hi Seena, just to note: this ticket might be relevant. https://tracker.ceph.com/issues/48276 Mind leaving a comment there? Thanks, Igor On 11/23/2020 2:51 AM, Seena Fallah wrote: Now one of my OSDs gets a segfault. Here is the full trace: https://paste.ubuntu.com/p/4KHcCG9YQx/ On Mon,
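
For the memory side of this thread, the knob usually involved is osd_memory_target; the value below is only an example, and the crash details can be pulled from the crash module for the tracker ticket.

    ceph config set osd osd_memory_target 4294967296   # example: 4 GiB
    ceph crash ls
    ceph crash info <crash-id>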

[ceph-users] HA_proxy setup

2020-11-23 Thread Szabo, Istvan (Agoda)
Hi, I wonder whether anybody has a setup like the one I want to set up. 1st subnet: 10.118.170.0/24 (FE users); 2nd subnet: 10.192.150.0/24 (BE users). The users come from these subnets, and I want the FE users to come in on the 1st interface of the load balancer, and the BE users to come in on
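
A minimal haproxy.cfg sketch of that layout, assuming two frontends bound to addresses in the two subnets and a shared RGW backend; all addresses and server names are placeholders.

    # Add to /etc/haproxy/haproxy.cfg:
    #   frontend fe_users
    #       bind 10.118.170.10:80
    #       default_backend rgw
    #   frontend be_users
    #       bind 10.192.150.10:80
    #       default_backend rgw
    #   backend rgw
    #       balance roundrobin
    #       server rgw1 192.0.2.11:8080 check
    #       server rgw2 192.0.2.12:8080 check
    systemctl reload haproxy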

[ceph-users] Sizing radosgw and monitor

2020-11-23 Thread Szabo, Istvan (Agoda)
Hi, I haven't really found any documentation about how to size radosgw. One Red Hat doc says we need to decide on a ratio like 1:50 or 1:100 OSDs per RGW. I had an issue earlier where a user was source-loadbalanced, so they always went to the same radosgw and at one point just maxed it out. So the
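
A back-of-the-envelope version of the ratio mentioned above, plus a check that load is actually spread across gateways; the numbers are illustrative only.

    # With the 1:50 rule of thumb, a 200-OSD cluster would get ~4 RGW daemons
    # behind a load balancer: 200 / 50 = 4.
    ceph osd stat            # how many OSDs are in the cluster
    ceph -s | grep -i rgw    # how many rgw daemons report in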

[ceph-users] Re: NoSuchKey on key that is visible in s3 list/radosgw bk

2020-11-23 Thread Frédéric Nass
Hi Denis, You might want to look at rgw_gc_obj_min_wait from [1] and try increasing the default value of 7200s (2 hours) to whatever suits your needs < 2^64. Just remember that at some point you'll have to get these objects processed by the gc, or manually through the API [2]. One thing that
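
A sketch of the manual GC handling mentioned above; the option name comes from the thread, while the instance name and value are examples only.

    radosgw-admin gc list --include-all   # objects waiting for garbage collection
    radosgw-admin gc process              # force a GC pass now
    # Example only, in ceph.conf:
    #   [client.rgw.gateway1]             # placeholder instance name
    #   rgw_gc_obj_min_wait = 86400       # default is 7200 seconds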