[ceph-users] log_latency_fn slow operation

2020-03-03 Thread 徐蕴
Hi, Our cluster (14.2.6) has had sporadic slow ops warnings since upgrading from Jewel a month ago. Today I checked the OSD log files and found many entries like: ceph-osd.5.log:2020-03-04 10:33:31.592 7f18ca41f700 0 bluestore(/var/lib/ceph/osd/ceph-5) log_latency_fn slow operation
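A minimal sketch of how such entries can be narrowed down, assuming osd.5 from the log above and standard log/admin-socket paths; the perf counter names can vary between releases, and as far as I recall the warning age threshold is governed by bluestore_log_op_age:

  # count the slow-operation entries per OSD log
  grep -c 'log_latency_fn slow operation' /var/log/ceph/ceph-osd.5.log
  # inspect BlueStore latency counters over the admin socket
  ceph daemon osd.5 perf dump | grep -E 'commit_lat|kv_sync_lat' -A 3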

[ceph-users] Re: consistency of import-diff

2020-03-03 Thread Stefan Priebe - Profihost AG
Hi, On 03.03.20 at 20:54, Jack wrote: > Hi, > > You can use a full local export, piped to some hash program (this is > what Backurne¹ does): rbd export - | xxhsum > Then, check the hash consistency with the original. Thanks for the suggestion, but this still needs to run an rbd export on the

[ceph-users] Need clarification on CephFS, EC Pools, and File Layouts

2020-03-03 Thread Dave Hall
Hello, This is for a cluster currently running 14.2.7.  Since our cluster is still relatively small, we feel a strong need to run our CephFS on an EC pool (8+2) with CRUSH failure domain = OSD to maximize capacity. I have read and re-read
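A minimal sketch of that kind of setup, assuming illustrative names (profile ec-8-2, pool cephfs_data_ec, filesystem cephfs, a mounted directory /mnt/cephfs/data):

  # 8+2 erasure-code profile with failure domain = osd
  ceph osd erasure-code-profile set ec-8-2 k=8 m=2 crush-failure-domain=osd
  ceph osd pool create cephfs_data_ec 128 128 erasure ec-8-2
  # EC pools need overwrites enabled before CephFS can use them
  ceph osd pool set cephfs_data_ec allow_ec_overwrites true
  ceph fs add_data_pool cephfs cephfs_data_ec
  # direct a directory to the EC pool via a file layout
  setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/data

The primary CephFS data pool is usually kept replicated, with the EC pool added as a secondary data pool selected through file layouts as above.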

[ceph-users] Re: consistency of import-diff

2020-03-03 Thread Jack
Hi, You can use a full local export, piped to some hash program (this is what Backurne¹ does): rbd export - | xxhsum Then, check the hash consistency with the original. Regards, [1] https://github.com/JackSlateur/backurne On 3/3/20 8:46 PM, Stefan Priebe - Profihost AG wrote: > Hello, > >
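For illustration, the comparison might look as follows, with placeholder pool/image/snapshot names; the two digests are then compared by eye or by a script:

  # hash of the source image at the backup snapshot (primary cluster)
  rbd export rbd/vm-disk-1@backup-20200303 - | xxhsum
  # hash of the imported copy (backup cluster) - the digests must match
  rbd export backup/vm-disk-1@backup-20200303 - | xxhsum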

[ceph-users] consistency of import-diff

2020-03-03 Thread Stefan Priebe - Profihost AG
Hello, does anybody know whether there is any mechanism to make sure an image looks like the original after an import-diff? While doing Ceph backups to another Ceph cluster I currently do a fresh import every 7 days, so I'm sure that if something went wrong with import-diff I have a fresh one every 7

[ceph-users] Fw: Incompatibilities (implicit_tenants & barbican) with Openstack after migrating from Ceph Luminous to Nautilus.

2020-03-03 Thread Scheurer François
(resending to the new mailing list) Dear Casey, Dear All, We tested the migration from Luminous to Nautilus and noticed two regressions breaking the RGW integration in OpenStack: 1) the following config parameter is not working on Nautilus but is valid on Luminous and on master:

[ceph-users] Re: Radosgw dynamic sharding jewel -> luminous

2020-03-03 Thread Casey Bodley
The default value of this reshard pool is "default.rgw.log:reshard". You can check 'radosgw-admin zone get' for the list of pool names/namespaces in use. It may be that your log pool is named ".rgw.log" instead, so you could change your reshard_pool to ".rgw.log:reshard" to share that. On
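A hedged sketch of checking and changing the reshard pool; the zone name default and the JSON round-trip are assumptions, and the period commit is only needed in multisite setups:

  # see which pools/namespaces the zone actually uses
  radosgw-admin zone get --rgw-zone=default
  # change reshard_pool to reuse the existing log pool
  radosgw-admin zone get --rgw-zone=default > zone.json
  #   edit zone.json: "reshard_pool": ".rgw.log:reshard"
  radosgw-admin zone set --rgw-zone=default --infile zone.json
  radosgw-admin period update --commit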

[ceph-users] 14.2.8 Multipart delete still not working

2020-03-03 Thread EDH - Manuel Rios
Hi, We have updated our cluster to 14.2.8 since we were hit by the bug https://tracker.ceph.com/issues/43583. Lifecycle policies now give more information than before; in 14.2.7 they finished instantly, so that is progress. But they are still not able to remove multipart uploads. Just a line of
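For reference, a hedged sketch of a lifecycle rule aimed at incomplete multipart uploads, sent with the AWS CLI against RGW; the endpoint and bucket name are placeholders and none of this is taken from the thread:

  aws --endpoint-url http://rgw.example.com:7480 s3api \
      put-bucket-lifecycle-configuration --bucket my-bucket \
      --lifecycle-configuration '{"Rules": [{"ID": "abort-mpu",
        "Status": "Enabled", "Filter": {"Prefix": ""},
        "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1}}]}'
  # check whether RGW has processed the lifecycle rules
  radosgw-admin lc list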

[ceph-users] e5 failed to get devid for : udev_device_new_from_subsystem_sysname failed on ''

2020-03-03 Thread Matthias Leopold
Hi, I installed a basic three-host cluster with ceph-deploy for development today and I'm seeing messages like mon.somehost@2(electing) e5 failed to get devid for : udev_device_new_from_subsystem_sysname failed on '' in the monitor logs when a monitor starts up. What is this? I found this:

[ceph-users] Re: leftover: spilled over 128 KiB metadata after adding db device

2020-03-03 Thread Stefan Priebe - Profihost AG
On 03.03.20 at 15:34, Rafał Wądołowski wrote: > Stefan, > > What version are you running? 14.2.7 > You wrote "Ceph automatically started to > migrate all data from the hdd to the ssd db device", is that normal auto > compaction or did Ceph develop a trigger to do it? normal after running

[ceph-users] Re: v14.2.8 Nautilus released

2020-03-03 Thread Marc Roos
This bluestore_min_alloc_size_ssd=4K, do I need to recreate these OSDs? Or does this magically change? What % performance increase can be expected?
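As far as I understand it, bluestore_min_alloc_size is baked in when an OSD is created, so the new 4K default only applies to OSDs deployed after the change; a small sketch, with osd.0 as an example:

  # value new OSDs would be created with (config only, not the on-disk value)
  ceph daemon osd.0 config get bluestore_min_alloc_size_ssd
  # existing OSDs keep the allocation size they were built with; to adopt
  # the new default they have to be destroyed and redeployed one by one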

[ceph-users] Re: Nautilus 14.2.8

2020-03-03 Thread Martin Verges
*cough* use croit to deploy your cluster, then you have a well-tested OS+Ceph image and no random version change ;) *cough* -- Martin Verges, Managing director

[ceph-users] Re: Restrict client access to a certain rbd pool with separate metadata and data pool

2020-03-03 Thread Max Krasilnikov
Hello! AFAIK, you have to access the replicated pool with the default data pool pointing to the EC pool, like this: [client.user] rbd_default_data_pool = pool.ec Now you can access pool.rbd, but the actual data will be placed on pool.ec. Maybe there is another way to specify the default data pool for using

[ceph-users] Re: Restrict client access to a certain rbd pool with separate metadata and data pool

2020-03-03 Thread Jason Dillaman
On Tue, Mar 3, 2020 at 10:05 AM Rainer Krienke wrote: > > Hello, > > I do not know how to restrict a client.user to a certain rbd pool where > this pool has a replicated metadata pool pool.rbd and an erasure coded > data pool named pool.ec . I am running ceph nautilus. > > I tried this for a

[ceph-users] Restrict client access to a certain rbd pool with separate metadata and data pool

2020-03-03 Thread Rainer Krienke
Hello, I do not know how to restrict a client.user to a certain rbd pool where this pool has a replicated metadata pool pool.rbd and an erasure-coded data pool named pool.ec. I am running Ceph Nautilus. I tried this for a client.user: # ceph auth caps client.user mon 'profile rbd' osd 'profile
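One hedged way to do this (not confirmed by the thread) is to grant the rbd profile on both pools and point image creation at the EC data pool:

  # allow client.user to use the replicated metadata pool and the EC data pool
  ceph auth caps client.user mon 'profile rbd' \
      osd 'profile rbd pool=pool.rbd, profile rbd pool=pool.ec'
  # create images whose data objects land in the EC pool
  rbd create --size 10G --data-pool pool.ec pool.rbd/testimage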

[ceph-users] Re: leftover: spilled over 128 KiB metadata after adding db device

2020-03-03 Thread Rafał Wądołowski
Stefan, What version are you running? You wrote "Ceph automatically started to migrate all data from the hdd to the ssd db device" - is that normal auto compaction, or did Ceph develop a trigger to do it? Best Regards, Rafał Wądołowski

[ceph-users] Re: building ceph Nautilus for Debian Stretch

2020-03-03 Thread Stefan Priebe - Profihost AG
On 03.03.20 at 08:38, Thomas Lamprecht wrote: > Hi, > > On 3/3/20 8:01 AM, Stefan Priebe - Profihost AG wrote: >> does anybody have a guide to build Ceph Nautilus for Debian Stretch? I >> wasn't able to find a backported gcc-8 for Stretch. > > That's because a gcc backport isn't too trivial,

[ceph-users] Re: leftover: spilled over 128 KiB metadata after adding db device

2020-03-03 Thread Stefan Priebe - Profihost AG
Does nobody have an idea? Ceph automatically started to migrate all data from the hdd to the ssd db device but has stopped at 128 KiB on nearly all OSDs. Greets, Stefan On 02.03.20 at 10:32, Stefan Priebe - Profihost AG wrote: > Hello, > > I added a db device to my OSDs running Nautilus. The DB
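A hedged sketch of how the leftover spillover is usually inspected and nudged along; osd.5 is an example and counter names can differ between releases:

  # spillover is reported in health detail and in the bluefs perf counters
  ceph health detail | grep -i spillover
  ceph daemon osd.5 perf dump | grep -E 'db_used_bytes|slow_used_bytes'
  # an explicit compaction often moves the remaining RocksDB levels off the slow device
  ceph tell osd.5 compact
  # offline, ceph-bluestore-tool bluefs-bdev-migrate can move the remainder explicitly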

[ceph-users] v14.2.8 Nautilus released

2020-03-03 Thread Abhishek Lekshmanan
This is the eighth update to the Ceph Nautilus release series. This release fixes issues across a range of subsystems. We recommend that all users upgrade to this release. Please note the following important changes in this release; as always the full changelog is posted at:

[ceph-users] Re: Nautilus 14.2.8

2020-03-03 Thread Fyodor Ustinov
Hi! > I really do not care about these 1-2 days in between, why are you? Do > not install it, configure yum to lock a version, update your local repo > less frequently. I already asked this question: what should those who decide to install Ceph for the first time today do? ceph-deploy

[ceph-users] Re: Nautilus 14.2.8

2020-03-03 Thread Marc Roos
I really do not care about these 1-2 days in between, why are you? Do not install it, configure yum to lock a version, update your local repo less frequently.
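A hedged sketch of the version pinning Marc refers to, using the yum versionlock plugin; the package list is illustrative:

  yum install -y yum-plugin-versionlock
  # pin the currently installed ceph packages
  yum versionlock add ceph ceph-common ceph-osd ceph-mon ceph-mgr ceph-mds
  # remove all locks when ready to upgrade
  yum versionlock clear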

[ceph-users] Nautilus 14.2.8

2020-03-03 Thread Fyodor Ustinov
Hi! Again a new version in the repository without an announcement. :( I wonder to whom one needs to write a letter and complain so that there would always be an announcement first, and then a new version in the repository? WBR, Fyodor.

[ceph-users] Re: [EXTERNAL] How can I fix "object unfound" error?

2020-03-03 Thread Simone Lazzaris
On Tuesday, 3 March 2020 at 04:57:35 CET, Steven Scheit wrote: > Can you share "ceph pg 6.36a query" output > Sure, it's attached. Simone Lazzaris, Qcom S.p.A.
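For context, a hedged sketch of the commands typically involved in chasing an unfound object; whether reverting or deleting is appropriate depends on the data and is not implied by the thread:

  # list the unfound objects and which OSDs have been probed
  ceph health detail
  ceph pg 6.36a list_unfound
  ceph pg 6.36a query
  # last resort, only after every candidate OSD has been queried:
  #   ceph pg 6.36a mark_unfound_lost revert    (or ... delete)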

[ceph-users] Re: Octopus release announcement

2020-03-03 Thread Paul Emmerich
On Mon, Mar 2, 2020 at 7:19 PM Alex Chalkias wrote: > > Thanks for the update. Are you doing a beta release prior to the official > launch? The first RC was tagged a few weeks ago: https://github.com/ceph/ceph/tree/v15.1.0 Paul > > > On Mon, Mar 2, 2020 at 7:12 PM Sage Weil wrote: > > > It's