[ceph-users] Re: Ceph monitor won't start after Ubuntu update

2021-06-16 Thread Konstantin Shalygin
You can just use your mon DB data in another (Bionic?) Ubuntu to perform the upgrade, then return the upgraded data to your Focal Ubuntu. Cheers, k Sent from my iPhone > On 16 Jun 2021, at 19:39, Petr wrote: > > I would like to, but there are no Nautilus packages for Ubuntu 20 (Focal).
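A rough sketch of that mon-store shuffle (the mon id, cluster name "ceph" and paths are placeholders; this is only an outline of the suggestion, not a tested procedure):

  # on the Focal host: stop the mon and pack up its store
  systemctl stop ceph-mon@<mon-id>
  tar czf mon-store.tgz -C /var/lib/ceph/mon ceph-<mon-id>

  # on a Bionic host with Nautilus installed: unpack the store and run
  # ceph-mon once so it converts the on-disk data, then stop it again
  tar xzf mon-store.tgz -C /var/lib/ceph/mon
  ceph-mon -i <mon-id>

  # copy the upgraded /var/lib/ceph/mon/ceph-<mon-id> back to the Focal
  # host and start the Octopus mon there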

[ceph-users] Re: Likely date for Pacific backport for RGW fix?

2021-06-16 Thread Konstantin Shalygin
Hi, Cory has marked the backport PR as ready; after QA the fix should be merged. k Sent from my iPhone > On 16 Jun 2021, at 18:05, Chris Palmer wrote: > > The only thing that has bitten us is https://tracker.ceph.com/issues/50556 > which prevents a multipart

[ceph-users] osd_scrub_max_preemptions for large OSDs or large EC pgs

2021-06-16 Thread Dave Hall
Hello, I would like to ask about osd_scrub_max_preemptions in 14.2.20 for large OSDs (mine are 12TB) and/or large k+m EC pools (mine are 8+2). I did search the archives for this list, but I did not see any reference. Symptoms: I have been seeing a behavior in my cluster over the past 2 or 3 wee
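For reference, this knob can be inspected and raised at runtime via the centralized config (the default in Nautilus is 5; the 10 below is just an arbitrary example value):

  ceph config get osd osd_scrub_max_preemptions
  ceph config set osd osd_scrub_max_preemptions 10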

[ceph-users] Strange (incorrect?) upmap entries in OSD map

2021-06-16 Thread Andras Pataki
I've been working on some improvements to our large cluster's space balancing, when I noticed that sometimes the OSD maps have strange upmap entries. Here is an example on a clean cluster (PGs are active+clean):

    {
        "pgid": "1.1cb7",
        ...
        "up": [
            89
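For anyone wanting to inspect or clean these up, the exception table can be dumped and a single entry dropped like this (the PG id is the one from the example above):

  ceph osd dump | grep pg_upmap_items
  ceph osd rm-pg-upmap-items 1.1cb7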

[ceph-users] Re: JSON output schema

2021-06-16 Thread Vladimir Prokofev
This is a great start, thank you! Basically I can look through the code to get the keys I need. But maybe I'm approaching this task wrong? Maybe there's already some better solution to monitor cluster health details? Wed, 16 Jun 2021 at 02:47, Anthony D'Atri: > Before Luminous, mon clock skew
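One way to get stable, machine-readable keys instead of scraping text is to read the health check codes straight from the JSON output, e.g. (assuming jq is available):

  ceph health detail -f json-pretty
  ceph status -f json | jq '.health.checks | keys'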

[ceph-users] Re: ceph osd df return null

2021-06-16 Thread Konstantin Shalygin
Perhaps these OSDs are offline / out? Please upload your `ceph osd df tree` & `ceph osd tree` output to pastebin. Thanks, k > On 16 Jun 2021, at 10:43, julien lenseigne > wrote: > > when i do ceph osd df, > > some osd returns null size. For example : > > 0 hdd 7.27699 1.0 0B 0B

[ceph-users] Re: Ceph monitor won't start after Ubuntu update

2021-06-16 Thread Petr
Hello Konstantin, Wednesday, June 16, 2021, 1:50:55 PM, you wrote: > Hi, >> On 16 Jun 2021, at 01:33, Petr wrote: >> >> I've upgraded my Ubuntu server from 18.04.5 LTS to Ubuntu 20.04.2 LTS via >> 'do-release-upgrade', >> during that process ceph packages were upgraded from Luminous to Octopu

[ceph-users] Drop old SDD / HDD Host crushmap rules

2021-06-16 Thread Denny Fuchs
Hello, on one DC I have had, from the beginning, very old crush map rules to split HDD and SSD disks. They have been obsolete since Luminous and I want to drop them:

# ceph osd crush rule ls
replicated_rule
fc-r02-ssdpool
fc-r02-satapool
fc-r02-ssd

= [ { "rule_id": 0, "rul
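Assuming the old rules really are unreferenced, the cleanup would be along these lines (check which crush_rule each pool uses first; Ceph should refuse to delete a rule that is still in use, but verify anyway):

  ceph osd pool ls detail | grep crush_rule
  ceph osd crush rule rm fc-r02-satapool    # repeat for each unused rule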

[ceph-users] Likely date for Pacific backport for RGW fix?

2021-06-16 Thread Chris Palmer
Hi Our first upgrade (non-cephadm) from Octopus to Pacific 16.0.4 went very smoothly. Thanks for all the effort. The only thing that has bitten us is https://tracker.ceph.com/issues/50556 which prevents a multipart upload to an RGW bucket that has a b

[ceph-users] Re: Ceph Month June Schedule Now Available

2021-06-16 Thread Mike Perez
Hi everyone, here's today's schedule for Ceph Month:

9:00 ET / 15:00 CEST  Project Aquarium - An easy-to-use storage appliance wrapped around Ceph [Joao Eduardo Luis]
9:30 ET / 15:30 CEST  [lightning] Qemu: librbd vs krbd performance [Wout van Heeswijk]
9:40 ET / 15:40 CEST  [lightning] Evaluation of

[ceph-users] RADOSGW Keystone integration - S3 bucket policies targeting not just other tenants / projects ?

2021-06-16 Thread Christian Rohmann
Hello Ceph users, I've been wondering about the state of OpenStack Keystone auth in RADOSGW. 1) Even though the general documentation on RADOSGW S3 bucket policies is a little "misleading" (https://docs.ceph.com/en/latest/radosgw/bucketpolicy/#creation-and-removal) in showing users being refer
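For context, the documented policy syntax uses tenant-scoped principals; a minimal policy of that shape (the tenant, user and bucket names here are invented) would be:

  {
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam::other-tenant:user/some-user"]},
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
    }]
  }

applied for example with `s3cmd setpolicy policy.json s3://mybucket`. Whether principals finer-grained than tenant/user (e.g. Keystone roles) are honoured is exactly the open question here.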

[ceph-users] Re: Fwd: Re: Issues with Ceph network redundancy using L2 MC-LAG

2021-06-16 Thread mabi
‐‐‐ Original Message ‐‐‐ On Wednesday, June 16, 2021 10:18 AM, Andrew Walker-Brown wrote: > With active mode, you then have a transmit hashing policy, usually set > globally. > > On Linux the bond would be set as ‘bond-mode 802.3ad’ and then > ‘bond-xmit-hash-policy layer3+4’ - or what
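On Debian/Ubuntu with ifupdown, the bond stanza being discussed would look roughly like this (interface names are placeholders and the miimon value is just the common default, not something taken from this thread):

  auto bond0
  iface bond0 inet manual
      bond-slaves eno1 eno2
      bond-mode 802.3ad
      bond-miimon 100
      bond-xmit-hash-policy layer3+4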

[ceph-users] libceph: monX session lost, hunting for new mon

2021-06-16 Thread Magnus HAGDORN
Hi all, I know this came up before but I couldn't find a resolution. We get the error "libceph: monX session lost, hunting for new mon" a lot on our Samba servers that re-export CephFS. A lot means more than once a minute. On other machines that are less busy we get it about every 10-30 minutes. We on
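When chasing this it can help to see which mon the kernel client is currently attached to and how often it re-hunts; with debugfs mounted (and as root), something like:

  cat /sys/kernel/debug/ceph/*/monc
  dmesg -T | grep libceph | tail -20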

[ceph-users] Re: Ceph monitor won't start after Ubuntu update

2021-06-16 Thread Konstantin Shalygin
Hi, > On 16 Jun 2021, at 01:33, Petr wrote: > > I've upgraded my Ubuntu server from 18.04.5 LTS to Ubuntu 20.04.2 LTS via > 'do-release-upgrade', > during that process ceph packages were upgraded from Luminous to Octopus and > now ceph-mon daemon(I have only one) won't start, log error is: > "

[ceph-users] Re: Strategy for add new osds

2021-06-16 Thread Nico Schottelius
I have to say I am reading quite a few interesting strategies in this thread and I'd like to briefly take the time to compare them:

1) one-by-one OSD adding
   - least amount of PG rebalance
   - will potentially re-re-balance data that has just been distributed when the next OSD is phased in
   - limits the
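For comparison, the add-everything-at-once variant usually boils down to pausing rebalancing while the new OSDs come up, roughly (a generic sketch, not commands from this thread):

  ceph osd set norebalance
  # ... deploy / activate all the new OSDs ...
  ceph osd unset norebalance    # one large rebalance instead of many small ones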

[ceph-users] Re: Fwd: Re: Issues with Ceph network redundancy using L2 MC-LAG

2021-06-16 Thread Andrew Walker-Brown
Ideally you'd want the transmit hash to be the same... but so long as load is being spread pretty evenly over all the links in the LAG, then you're fine. Sent from Mail for Windows 10 From: mabi Sent: 16 June 2021 09:29

[ceph-users] Re: Strategy for add new osds

2021-06-16 Thread Anthony D'Atri
> Hi, > > as far as I understand it, > > you get no real benefit from doing them one by one, as each OSD add can > cause a lot of data to be moved to a different OSD, even though you just > rebalanced it. Less than with older releases, but yeah. I've known someone who advised against doing

[ceph-users] Re: Fwd: Re: Issues with Ceph network redundancy using L2 MC-LAG

2021-06-16 Thread Serkan Çoban
You cannot do much if the link is flapping or the cable is bad. Maybe you can write some rules to shut the port down on the switch if the error packet ratio goes up. I also remember there is some config on the switch side for link flapping. On Wed, Jun 16, 2021 at 10:57 AM huxia...@horebdata.cn

[ceph-users] Re: Fwd: Re: Issues with Ceph network redundancy using L2 MC-LAG

2021-06-16 Thread Andrew Walker-Brown
Depends on how you configure the switch port. For Dell:

Interface Ethernet 1/1/20
 No switchport
 Channel-group 10 mode active
!

'Mode active' sets it as a dynamic LACP LAG; otherwise it would be 'mode static'. With active mode, you then have a transmit hashing policy, usually set globally. On

[ceph-users] Re: Fwd: Re: Issues with Ceph network redundancy using L2 MC-LAG

2021-06-16 Thread huxia...@horebdata.cn
Is it true that MC-LAG and 802.3ad, by default, work active-active? What else should I take care of to ensure fault tolerance when one path is bad? best regards, samuel huxia...@horebdata.cn From: Joe Comeau Date: 2021-06-15 23:44 To: ceph-users@ceph.io Subject: [ceph-users] Fw
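Independent of the switch side, the bond state and LACP partner details can be checked from Linux, which is a quick way to confirm that both paths carry traffic and that a pulled link actually fails over:

  cat /proc/net/bonding/bond0
  ip -statistics link show bond0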

[ceph-users] Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

2021-06-16 Thread Ackermann, Christoph
Good Morning Dan, adjusting "ceph osd setmaxosd 76" solved the problem so far. :-) Thanks and Best regards, Christoph On Tue, 15 Jun 2021 at 21:14, Ackermann, Christoph < c.ackerm...@infoserve.de> wrote: > Hi Dan, > > Thanks for the hint, I'll try this tomorrow with a test bed first. T
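For anyone else hitting this, the check-and-set pair is simply (76 is the value from this thread; max_osd should be at least the highest OSD id in the cluster plus one):

  ceph osd getmaxosd
  ceph osd setmaxosd 76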

[ceph-users] ceph osd df return null

2021-06-16 Thread julien lenseigne
Hi, when I do ceph osd df, some OSDs return null size. For example:

 0   hdd  7.27699  1.0  0B  0B  0B  0B  0B  0B  0  0  29
 1   hdd  7.27698  1.0  0B  0B  0B  0B  0B  0B  0  0  23
11   ssd  0.5      1.0  0B  0B  0B