[ceph-users] Re: [Ceph-announce] Re: v16.2.6 Pacific released

2021-09-16 Thread Fyodor Ustinov
Hi! > Correction: Containers live at https://quay.io/repository/ceph/ceph now. As I understand it, the command "ceph orch upgrade start --ceph-version 16.2.6" is broken and will not be able to update Ceph? root@s-26-9-19-mon-m1:~# ceph orch upgrade start --ceph-version 16.2.6 Initiating upgrade
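
A possible workaround, sketched below, is to point the upgrade at the quay.io image explicitly instead of resolving it by version; the tag is assumed to follow the usual vX.Y.Z convention and is not confirmed by the truncated thread above.

    # Point the orchestrator at the quay.io image explicitly
    ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.6

    # Watch progress and overall cluster state
    ceph orch upgrade status
    ceph -s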

[ceph-users] Re: v16.2.6 Pacific released

2021-09-16 Thread David Galloway
Correction: Containers live at https://quay.io/repository/ceph/ceph now. On 9/16/21 3:48 PM, David Galloway wrote: > We're happy to announce the 6th backport release in the Pacific series. > We recommend users update to this release. For detailed release notes with links & a changelog, please
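
For anyone pulling images by hand rather than through cephadm, the new location can be checked with a plain pull; a minimal sketch, assuming podman and the v16.2.6 tag:

    # Pull the Pacific 16.2.6 image from its new home on quay.io
    podman pull quay.io/ceph/ceph:v16.2.6
    # (docker pull works the same way if you use Docker instead of Podman)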

[ceph-users] v16.2.6 Pacific released

2021-09-16 Thread David Galloway
We're happy to announce the 6th backport release in the Pacific series. We recommend users update to this release. For detailed release notes with links & a changelog, please refer to the official blog entry at https://ceph.io/en/news/blog/2021/v16-2-6-pacific-released Notable Changes --

[ceph-users] Re: Is it normal Ceph reports "Degraded data redundancy" in normal use?

2021-09-16 Thread Eugen Block
You’re absolutely right, of course, the balancer wouldn’t cause degraded PGs. Flapping OSDs seem very likely here. Quoting Josh Baergen: I assume it's the balancer module. If you write lots of data quickly into the cluster the distribution can vary and the balancer will try to even out t

[ceph-users] Re: Is it normal Ceph reports "Degraded data redundancy" in normal use?

2021-09-16 Thread Josh Baergen
> I assume it's the balancer module. If you write lots of data quickly into the cluster the distribution can vary and the balancer will try to even out the placement. The balancer won't cause degradation, only misplaced objects. > Degraded data redundancy: 260/11856050 objects degraded >
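
To see which of the two states a cluster is actually reporting, the health detail and per-PG states can be inspected directly. A minimal sketch with generic commands, not specific to this cluster:

    # Degraded and misplaced objects are reported separately here
    ceph health detail

    # Count PGs per state; look for "degraded" vs. "remapped" entries
    ceph pg dump pgs_brief 2>/dev/null | tail -n +2 | awk '{print $2}' | sort | uniq -c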

[ceph-users] Re: OSDs unable to mount BlueFS after reboot

2021-09-16 Thread Davíð Steinn Geirsson
On Thu, Sep 16, 2021 at 08:17:48AM +0200, Stefan Kooman wrote: > On 9/16/21 00:09, Davíð Steinn Geirsson wrote: > You might get more information by increasing debug for rocksdb / bluefs / bluestore: > ceph config set osd.0 debug_rocksdb 20/20 > ceph config set

[ceph-users] Re: BLUEFS_SPILLOVER

2021-09-16 Thread Szabo, Istvan (Agoda)
It's an interesting article; so it is a fixed size, maybe the article is older. I have another cluster with just 2 NVMe drives (1.92TB) in use, and there is also spillover there, but this is the value there: osd.0 spilled over 43 GiB metadata from 'db' device (602 GiB used of 894 GiB) to slow device Se

[ceph-users] Re: cephadm orchestrator not responding after cluster reboot

2021-09-16 Thread Javier Cacheiro
Hi Adam, thanks a lot for your answer. I have tried "ceph mgr fail" and the active manager migrated to a different node, but "ceph orch" commands continue to hang. # ceph orch status --verbose ... Submitting command: {'prefix': 'orch status', 'target': ('mon-mgr', '')} submit {"prefix": "

[ceph-users] Re: Is it normal Ceph reports "Degraded data redundancy" in normal use?

2021-09-16 Thread Eugen Block
Hi, I assume it's the balancer module. If you write lots of data quickly into the cluster the distribution can vary and the balancer will try to even out the placement. You can check the status with ceph balancer status and disable it if necessary: ceph balancer mode none Regards, Eugen
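
For reference, the commands mentioned above, plus turning the balancer back on afterwards; a minimal sketch (upmap as the re-enable mode is an assumption, not from the thread):

    # Check what the balancer is currently doing
    ceph balancer status

    # Disable it while investigating
    ceph balancer mode none
    ceph balancer off

    # Re-enable later; upmap is the usual mode on recent clusters
    ceph balancer mode upmap
    ceph balancer on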

[ceph-users] Re: cephadm orchestrator not responding after cluster reboot

2021-09-16 Thread Adam King
Does running "ceph mgr fail" then waiting a bit make the "ceph orch" commands responsive? That's worked for me sometimes before when they wouldn't respond. On Thu, Sep 16, 2021 at 8:08 AM Javier Cacheiro wrote: > Hi, > > I have configured a ceph cluster with the new Pacific version (16.2.4) > us
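
The suggested workaround in command form; a minimal sketch using only standard mgr/orch commands:

    # Fail over the active mgr so a standby takes over and reloads its modules
    ceph mgr fail

    # Give the new active mgr a moment, then check which daemon is active
    sleep 30
    ceph mgr stat

    # Try the orchestrator again
    ceph orch status
    ceph orch ps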

[ceph-users] Module 'volumes' has failed dependency: /lib/python3/dist-packages/cephfs.cpython-37m-x86_64-linux-gnu.so: undefined symbol: ceph_abort_conn

2021-09-16 Thread Felix Joussein
Hi, after upgrading Ceph to 15.2.14 on a Proxmox 6.4.13 3-node cluster (Debian) I get a health warning: Module 'volumes' has failed dependency: /lib/python3/dist-packages/cephfs.cpython-37m-x86_64-linux-gnu.so: undefined symbol: ceph_abort_conn What am I missing? I have already checked that pytho
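
An undefined symbol in the python bindings usually points at a version mismatch between the running mgr and the installed python3-cephfs/python3-rados packages. A minimal sketch of checks one might run on a Debian/Proxmox node; package names are assumed from standard Debian packaging:

    # Compare the running ceph version with the installed python bindings
    ceph --version
    dpkg -l | grep -E 'python3-(ceph|rados|rbd)'

    # If the bindings lag behind, reinstalling them at the matching version may help
    apt install --reinstall python3-cephfs python3-rados

    # Restart the mgr afterwards so the 'volumes' module reloads
    systemctl restart ceph-mgr.target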

[ceph-users] cephadm orchestrator not responding after cluster reboot

2021-09-16 Thread Javier Cacheiro
Hi, I have configured a Ceph cluster with the new Pacific version (16.2.4) using cephadm to see how it performed. Everything went smoothly and the cluster was working fine until I did an ordered shutdown and reboot of the nodes, and after that all "ceph orch" commands hang as if they were not able

[ceph-users] Re: rbd freezes/timeout

2021-09-16 Thread Leon Ruumpol
Hello, here is the first trace of this problem; does anyone have an idea how to proceed? [do sep 16 06:06:15 2021] WARNING: CPU: 5 PID: 12793 at net/ceph/osd_client.c:558 request_reinit+0x12f/0x150 [libceph] [do sep 16 06:06:15 2021] Modules linked in: rpcsec_gss_krb5 auth_rpcgss oid_registry nfsv3 nfs_a

[ceph-users] Re: Health check failed: 1 pools full

2021-09-16 Thread Frank Schilder
I found it; it does indeed have to do with snapshots, but not in the way I thought. At 04:17:39: HEALTH_ERR 20 large omap objects; 1 pools full LARGE_OMAP_OBJECTS 20 large omap objects 20 large objects found in pool 'con-fs2-meta1' Search the cluster log for 'Large omap object found' for more d
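
The health message itself suggests searching the cluster log; a minimal sketch of doing that on a mon host (the log path is the stock one and may differ on containerized deployments):

    # Find which objects triggered the LARGE_OMAP_OBJECTS warning
    grep 'Large omap object found' /var/log/ceph/ceph.log

    # Include rotated logs as well
    zgrep 'Large omap object found' /var/log/ceph/ceph.log*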

[ceph-users] Is it normal Ceph reports "Degraded data redundancy" in normal use?

2021-09-16 Thread Kai Stian Olstad
Hi, I'm testing a Ceph cluster with "rados bench"; it's an empty Cephadm install that only has the one pool device_health_metrics. Create a pool with 1024 PGs on the HDD devices (15 servers have HDDs and 13 have SSDs): ceph osd pool create pool-ec32-isa-reed_sol_van-hdd 1024 1024 erasure ec32-isa-r
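
The pool-creation command in the preview is cut off; for context, the general shape of creating such an EC profile and pool and benchmarking it looks roughly like the sketch below. The profile and pool names are illustrative, and the k/m, plugin and technique values are only inferred from the pool name in the message.

    # 3+2 erasure-code profile on HDDs using the ISA plugin (names and values are assumptions)
    ceph osd erasure-code-profile set ec32-isa \
        plugin=isa k=3 m=2 technique=reed_sol_van crush-device-class=hdd

    # Create the pool with 1024 PGs using that profile
    ceph osd pool create pool-ec32-hdd 1024 1024 erasure ec32-isa

    # Write benchmark for 60 seconds, read it back, then clean up the bench objects
    rados bench -p pool-ec32-hdd 60 write --no-cleanup
    rados bench -p pool-ec32-hdd 60 seq
    rados -p pool-ec32-hdd cleanup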

[ceph-users] Re: Docker & CEPH-CRASH

2021-09-16 Thread Sebastian Wagner
ceph-crash should work, as crash dumps aren't namespaced in the kernel. Note that you need a pid1 process in your containers in order for crash dumps to be created. On 16.09.21 at 08:57, Eugen Block wrote: I haven't tried it myself but it would probably work to run the crash services apart fr

[ceph-users] Re: OSDs unable to mount BlueFS after reboot

2021-09-16 Thread Stefan Kooman
On 9/16/21 00:09, Davíð Steinn Geirsson wrote: You might get more information by increasing debug for rocksdb / bluefs / bluestore: ceph config set osd.0 debug_rocksdb 20/20 ceph config set osd.0 debug_bluefs 20/20 ceph config set osd.0 debug_bluestore 20/20 These debug tuneables give
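
Once the extra logging has been collected (it ends up in the OSD's log on that host), the per-OSD overrides can be dropped again; a minimal sketch:

    # Remove the per-OSD debug overrides and fall back to the defaults
    ceph config rm osd.0 debug_rocksdb
    ceph config rm osd.0 debug_bluefs
    ceph config rm osd.0 debug_bluestore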

[ceph-users] Endpoints part of the zonegroup configuration

2021-09-16 Thread Szabo, Istvan (Agoda)
Hi, the documentation is not really clear about the endpoints under the zone and under the zonegroup "collection" part. 1. If you have a loadbalancer in front of the gateways, should you put the LB in these sections, or always list the individual gateways? Having this configuration: jpst.it/2C
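
For reference, the zonegroup endpoints are edited with radosgw-admin; a minimal sketch of pointing them at a load balancer. The zonegroup name and LB URL are placeholders, and whether the LB or the individual gateways belong there is exactly the open question of this thread.

    # Inspect the current zonegroup definition, including its endpoints list
    radosgw-admin zonegroup get --rgw-zonegroup=default

    # Replace the endpoints with the load balancer address (placeholder URL)
    radosgw-admin zonegroup modify --rgw-zonegroup=default \
        --endpoints=http://lb.example.com:8080

    # Commit the change to the period so the gateways pick it up
    radosgw-admin period update --commit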

[ceph-users] BLUEFS_SPILLOVER

2021-09-16 Thread Szabo, Istvan (Agoda)
Hi, something weird is happening: I have 1 NVMe drive, and 3x SSDs are using it for WAL and DB. The LVM is 596GB, but the health detail says x GiB spilled over to the slow device even though only 317 GB are in use :/ [WRN] BLUEFS_SPILLOVER: 3 OSD(s) experiencing BlueFS spillover osd.10 spilled ov
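
To see how much of the DB device each OSD actually uses versus what spilled to the slow device, the BlueFS counters can be read from the OSD itself; a minimal sketch using osd.10 from the health output above (run the daemon command on the OSD's host):

    # Health detail lists the affected OSDs and the spilled amount
    ceph health detail | grep -A5 BLUEFS_SPILLOVER

    # Per-OSD BlueFS usage (db_used_bytes, slow_used_bytes, ...)
    ceph daemon osd.10 perf dump bluefs

    # A manual compaction sometimes moves data back after a DB resize
    ceph tell osd.10 compact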

[ceph-users] radosgw find buckets which use the s3website feature

2021-09-16 Thread Boris Behrens
Hi people, is there a way to find buckets that use the s3website feature? Cheers Boris ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io
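
One approach, since the website configuration is stored per bucket, is to probe each bucket for it and note which ones answer. A minimal sketch using the AWS CLI against the radosgw endpoint; the endpoint URL is a placeholder, jq is assumed to be installed, and the credentials used must be allowed to read the buckets' website config (e.g. an admin/system user).

    #!/bin/sh
    # List every bucket known to rgw, then probe each one for a website configuration.
    RGW_ENDPOINT=http://rgw.example.com:8080

    for bucket in $(radosgw-admin bucket list | jq -r '.[]'); do
        if aws --endpoint-url "$RGW_ENDPOINT" s3api get-bucket-website \
              --bucket "$bucket" >/dev/null 2>&1; then
            echo "website enabled: $bucket"
        fi
    done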