Re: [ceph-users] cephfs degraded on ceph luminous 12.2.2

2018-01-08 Thread Sergey Malinin
You cannot force the MDS to quit the "replay" state, for the obvious reason of keeping data consistent. You might raise mds_beacon_grace to a reasonably high value that allows the MDS to replay the journal without being marked laggy and eventually blacklisted.
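
A minimal sketch of what raising the grace period could look like on Luminous (daemon names and the value are placeholders, not taken from the thread):

  # Runtime change; mds_beacon_grace is also evaluated by the monitors,
  # so inject it there as well (or persist it under [global] in ceph.conf):
  ceph tell mds.<name> injectargs '--mds_beacon_grace 600'
  ceph tell mon.<id> injectargs '--mds_beacon_grace 600'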

Re: [ceph-users] Bad crc causing osd hang and block all request.

2018-01-08 Thread Konstantin Shalygin
What could cause this problem? Is this caused by a faulty HDD? Which data's CRC didn't match? This may be caused by a faulty drive. Check your dmesg.
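
For reference, a quick hardware check along those lines (the device name is a placeholder; run it on the host carrying the affected OSD):

  # Look for I/O or SATA errors logged by the kernel, then query SMART:
  dmesg -T | grep -iE 'error|i/o|ata|sd[a-z]'
  smartctl -a /dev/sdX    # reallocated/pending sector counts are the usual suspects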

Re: [ceph-users] Safe to delete data, metadata pools?

2018-01-08 Thread John Spray
On Mon, Jan 8, 2018 at 2:55 AM, Richard Bade wrote: > Hi Everyone, > I've got a couple of pools that I don't believe are being used but > have a reasonably large number of pg's (approx 50% of our total pg's). > I'd like to delete them but as they were pre-existing when I
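
A hedged sketch of how one might verify the pools are unused before removing them (the pool name is a placeholder; deletion is irreversible and, on Luminous, requires mon_allow_pool_delete=true):

  ceph df                    # per-pool object counts and usage
  rados -p <pool> ls | head  # spot-check whether any objects exist
  ceph osd pool delete <pool> <pool> --yes-i-really-really-mean-it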

[ceph-users] Luminous : All OSDs not starting when ceph.target is started

2018-01-08 Thread nokia ceph
Hello, I have installed Luminous 12.2.2 on a 5 node cluster with logical volume OSDs. I am trying to stop and start Ceph on one of the nodes using systemctl commands: *systemctl stop ceph.target; systemctl start ceph.target* When I stop Ceph, all OSDs on the node are stopped properly. But when I
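
For comparison, the usual places to look when ceph.target does not bring the OSDs back (the id is a placeholder; on ceph-volume deployments the per-OSD ceph-volume@lvm-* units handle activation and need to be enabled as well):

  systemctl list-dependencies ceph.target   # should pull in ceph-osd.target
  systemctl status ceph-osd@<id>            # per-OSD unit state after the start
  systemctl enable ceph-osd@<id>            # tie the unit to the target
  systemctl start ceph-osd.target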

Re: [ceph-users] ceph-volume error messages

2018-01-08 Thread Alfredo Deza
On Mon, Dec 11, 2017 at 2:26 AM, Martin, Jeremy wrote: > Hello, > > > > We are currently doing some evaluations on a few storage technologies and > ceph has made it on our short list but the issue is we haven’t been able to > evaluate it as I can’t seem to get it to deploy out.

Re: [ceph-users] fail to create bluestore osd with ceph-volume command on ubuntu 14.04 with ceph 12.2.2

2018-01-08 Thread Alfredo Deza
ceph-volume relies on systemd and 14.04 does not have that available. You will need to upgrade to a distro version that supports systemd On Wed, Dec 13, 2017 at 11:37 AM, Stefan Kooman wrote: > Quoting 姜洵 (jiang...@100tal.com): >> Hi folks, >> >> >> I am trying to install create a

[ceph-users] Move an erasure coded RBD image to another pool.

2018-01-08 Thread Caspar Smit
Hi all, I've migrated all of my replicated RBD images to erasure-coded images using "rbd copy" with the "--data-pool" parameter. So I now have a replicated pool with 4K PGs that is only storing RBD headers and metadata; the RBD data is stored on the erasure pool. Now I would like to move the
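
As a sketch of the kind of copy being described (pool and image names are placeholders; this assumes "rbd copy" on Luminous accepts --data-pool the same way "rbd create" does):

  # Header/metadata go to the destination replicated pool, while the data
  # objects stay on the erasure-coded pool:
  rbd copy oldpool/image1 newpool/image1 --data-pool ecpool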

Re: [ceph-users] Luminous RGW Metadata Search

2018-01-08 Thread Youzhong Yang
Hi Yehuda, Thanks for replying. >radosgw failed to connect to your ceph cluster. Does the rados command >with the same connection params work? I am not quite sure what I should test by running the rados command. So I tried again; could you please take a look and check what could have gone wrong?
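
A minimal connectivity test of the sort being asked for might look like this (the user name and keyring path are placeholders matching whatever radosgw is configured with):

  rados --id rgw.gateway1 --keyring /etc/ceph/ceph.client.rgw.gateway1.keyring lspools
  # If this also fails, the cephx user/keyring or the mon addresses in
  # ceph.conf are the problem, not radosgw itself.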

Re: [ceph-users] Ceph on Public IP

2018-01-08 Thread John Petrini
Ceph will always bind to the local IP. It can't bind to an IP that isn't assigned directly to the server, such as a NAT'd IP. So your public network should be the local network that's configured on each server. If your cluster network is 10.128.0.0/16, for instance, your public network might be
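
As an illustration only (the subnets are made up), the distinction usually ends up in ceph.conf like this:

  [global]
      public network  = 192.168.10.0/24   # a subnet actually assigned to the NICs
      cluster network = 10.128.0.0/16     # replication / heartbeat traffic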

Re: [ceph-users] MDS cache size limits

2018-01-08 Thread Marc Roos
I guess the MDS cache holds files, attributes, etc., but how many files will the default "mds_cache_memory_limit": "1073741824" hold? -Original Message- From: Stefan Kooman [mailto:ste...@bit.nl] Sent: Friday, 5 January 2018 12:54 To: Patrick Donnelly Cc: Ceph Users Subject: Re:
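
A hedged example of inspecting and raising the limit (the daemon name and value are placeholders; "ceph daemon" must be run on the MDS host):

  ceph daemon mds.<name> config get mds_cache_memory_limit
  ceph tell mds.<name> injectargs '--mds_cache_memory_limit 4294967296'   # 4 GiB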

Re: [ceph-users] MDS cache size limits

2018-01-08 Thread Sergey Malinin
In my experience, a 1 GB cache can hold roughly 400k inodes. _ From: Marc Roos Sent: Monday, January 8, 2018 23:02 Subject: Re: [ceph-users] MDS cache size limits To: pdonnell , stefan Cc: ceph-users
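
For rough sizing (a back-of-the-envelope figure, not from the thread): 1 GiB divided by 400,000 inodes is about 2.6 KiB of cache per cached inode, including dentry and bookkeeping overhead, so the inode count should scale roughly linearly with mds_cache_memory_limit.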

Re: [ceph-users] C++17 and C++ ABI on master

2018-01-08 Thread Sage Weil
On Mon, 8 Jan 2018, Adam C. Emerson wrote: > Good day, > > I've just merged some changes into master that set us up to compile > with C++17. This will require a reasonably new compiler to build > master. Yay! > Due to a change in how 'noexcept' is handled (it is now part of the type > signature

[ceph-users] C++17 and C++ ABI on master

2018-01-08 Thread Adam C. Emerson
Good day, I've just merged some changes into master that set us up to compile with C++17. This will require a reasonably new compiler to build master. Due to a change in how 'noexcept' is handled (it is now part of the type signature of a function), mangled symbol names of noexcept functions are
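
A quick way to confirm a build host's compiler is new enough (illustrative only; GCC 7 or Clang 5 and later should print 201703L):

  g++ -std=c++17 -x c++ -dM -E /dev/null | grep __cplusplus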

[ceph-users] Removing cache tier for RBD pool

2018-01-08 Thread Jens-U. Mozdzen
Hi *, trying to remove a caching tier from a pool used for RBD / Openstack, we followed the procedure from http://docs.ceph.com/docs/master/rados/operations/cache-tiering/#removing-a-writeback-cache and ran into problems. The cluster is currently running Ceph 12.2.2, the caching tier was
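
For context, the referenced procedure for a writeback tier boils down to roughly this sequence (pool names are placeholders; this is the documented outline, not the poster's exact commands):

  ceph osd tier cache-mode cachepool forward --yes-i-really-mean-it
  rados -p cachepool cache-flush-evict-all     # repeat until it finishes cleanly
  ceph osd tier remove-overlay basepool
  ceph osd tier remove basepool cachepool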

[ceph-users] Limitting logging to syslog server

2018-01-08 Thread Marc Roos
On a default Luminous test cluster I would like to limit the logging of what I guess are successful notifications related to deleted snapshots. I don't need 77k of these messages on my syslog server. What/where would be the best place to do this (rather than just dropping them on the syslog side)? Jan 8 13:11:54
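
Two knobs that are commonly involved, shown here only as a hedged starting point (which one applies depends on whether those lines come from the cluster log or from per-daemon logging):

  [global]
      clog to syslog = false                    # daemons stop sending cluster-log entries to syslog
  [mon]
      mon cluster log to syslog level = warn    # or keep syslog but raise the threshold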

Re: [ceph-users] Increase recovery / backfilling speed (with many small objects)

2018-01-08 Thread Mark Schouten
On Monday, 8 January 2018 11:34:22 CET Stefan Kooman wrote: > Thanks. I forgot to mention I already increased that setting to "10" > (and eventually 50). It will increase the speed a little bit: from 150 > objects/s to ~400 objects/s. It would still take days for the cluster > to recover.

Re: [ceph-users] "VolumeDriver.Create: Unable to create Ceph RBD Image"

2018-01-08 Thread Jason Dillaman
If you are using a pre-created RBD image for this, you will need to disable all the image features that krbd doesn't support: # rbd feature disable dummy01 exclusive-lock,object-map,fast-diff,deep-flatten On Sun, Jan 7, 2018 at 11:36 AM, Traiano Welcome wrote: > Hi List > >
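
Related commands, for reference (pool/image names and size are placeholders):

  rbd info rbd/dummy01        # shows which features are currently enabled
  # Or create images with only krbd-friendly features from the start:
  rbd create rbd/dummy02 --size 10G --image-feature layering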

Re: [ceph-users] Linux Meltdown (KPTI) fix and how it affects performance?

2018-01-08 Thread Paul Ashman
Graham, The before/after FIO tests sound interesting; we're trying to pull together some benchmark tests to do the same for our Ceph cluster. Could you expand on which parameters you used, and how the file size relates to the RAM available to your VM? Regards, Paul Ashman
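
Not Graham's actual parameters, but a typical fio invocation for this kind of before/after test; keeping --size well above the VM's RAM and using --direct=1 stops the page cache from hiding the extra syscall cost that KPTI introduces:

  fio --name=randwrite --filename=/mnt/test/fio.dat --rw=randwrite --bs=4k \
      --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 --size=8g \
      --runtime=120 --time_based --group_reporting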

Re: [ceph-users] Increase recovery / backfilling speed (with many small objects)

2018-01-08 Thread Stefan Kooman
Quoting Chris Sarginson (csarg...@gmail.com): > You probably want to consider increasing osd max backfills > > You should be able to inject this online > > http://docs.ceph.com/docs/luminous/rados/configuration/osd-config-ref/ > > You might want to drop your osd recovery max active settings
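
For reference, injecting those settings online looks roughly like this (values are examples only; watch client latency while raising them, and note that on Luminous the HDD recovery-sleep throttle can also matter):

  ceph tell osd.* injectargs '--osd_max_backfills 4 --osd_recovery_max_active 8'
  ceph tell osd.* injectargs '--osd_recovery_sleep_hdd 0'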

[ceph-users] How to remove deactivated cephFS

2018-01-08 Thread Eugen Block
Hi list, all this is on Ceph 12.2.2. An existing cephFS (named "cephfs") was backed up as a tar ball, then "removed" ("ceph fs rm cephfs --yes-i-really-mean-it"), a new one created ("ceph fs new cephfs cephfs-metadata cephfs-data") and the content restored from the tar ball. According to
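
Useful state checks in this situation (a sketch, not the poster's output):

  ceph fs ls               # filesystems the monitors still know about
  ceph fs status cephfs    # MDS ranks plus the pools backing the new fs
  ceph df                  # which pools still hold the old data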

[ceph-users] WAL size constraints, bluestore_prefer_deferred_size

2018-01-08 Thread Richard Hesketh
I recently came across the bluestore_prefer_deferred_size family of config options, for controlling the upper size threshold on deferred writes. Given a number of users suggesting that write performance in filestore is better than write performance in bluestore - because filestore writing to an
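
For anyone wanting to inspect the current values, a hedged example (the OSD id is a placeholder; run on the OSD host via the admin socket):

  ceph daemon osd.0 config get bluestore_prefer_deferred_size
  ceph daemon osd.0 config show | grep prefer_deferred   # also lists the _hdd/_ssd variants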

Re: [ceph-users] ceph-volume does not support upstart

2018-01-08 Thread Alfredo Deza
ceph-volume relies on systemd; it will not work with upstart. Going the fstab way might work, but most of the lvm implementation will want to do systemd-related calls like enabling units and placing files. For upstart you might want to keep using ceph-disk, unless upgrading to a newer OS is an
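
A sketch of the ceph-disk path on an upstart host (device names are placeholders; ceph-disk is the older tool and normally activates the OSD via udev after prepare):

  ceph-disk prepare --bluestore /dev/sdb
  ceph-disk activate /dev/sdb1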

Re: [ceph-users] ceph-volume lvm deactivate/destroy/zap

2018-01-08 Thread Dan van der Ster
On Mon, Jan 8, 2018 at 4:37 PM, Alfredo Deza wrote: > On Thu, Dec 21, 2017 at 11:35 AM, Stefan Kooman wrote: >> Quoting Dan van der Ster (d...@vanderster.com): >>> Thanks Stefan. But isn't there also some vgremove or lvremove magic >>> that needs to bring down

Re: [ceph-users] ceph-volume lvm deactivate/destroy/zap

2018-01-08 Thread Alfredo Deza
On Thu, Dec 21, 2017 at 11:35 AM, Stefan Kooman wrote: > Quoting Dan van der Ster (d...@vanderster.com): >> Thanks Stefan. But isn't there also some vgremove or lvremove magic >> that needs to bring down these /dev/dm-... devices I have? > > Ah, you want to clean up properly before
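
A hedged outline of a full teardown (ids, VG/LV and device names are placeholders; the OSD should already be out of the cluster and purged from the CRUSH map and auth before this):

  systemctl stop ceph-osd@<id>
  umount /var/lib/ceph/osd/ceph-<id>
  ceph-volume lvm zap <vg>/<lv>     # wipes the LV contents
  lvremove -f <vg>/<lv>             # then remove the LVM layer itself
  vgremove <vg>
  pvremove /dev/sd<X>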

[ceph-users] cephfs degraded on ceph luminous 12.2.2

2018-01-08 Thread Alessandro De Salvo
Hi, I'm running on ceph luminous 12.2.2 and my cephfs suddenly degraded. I have 2 active mds instances and 1 standby. All the active instances are now in replay state and show the same error in the logs: mds1 2018-01-08 16:04:15.765637 7fc2e92451c0  0 ceph version 12.2.2

Re: [ceph-users] cephfs degraded on ceph luminous 12.2.2

2018-01-08 Thread Lincoln Bryant
Hi Alessandro, What is the state of your PGs? Inactive PGs have blocked CephFS recovery on our cluster before. I'd try to clear any blocked ops and see if the MDSes recover. --Lincoln On Mon, 2018-01-08 at 17:21 +0100, Alessandro De Salvo wrote: > Hi, > > I'm running on ceph luminous 12.2.2
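
The usual checks for that (a sketch; the pgid is a placeholder):

  ceph health detail          # names the stuck PGs and blocked requests
  ceph pg dump_stuck inactive
  ceph pg <pgid> query        # shows what a specific PG is waiting on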

[ceph-users] Bad crc causing osd hang and block all request.

2018-01-08 Thread shadow_lin
Hi lists, Ceph version: Luminous 12.2.2. The cluster was doing a write throughput test when this problem happened. The cluster health became ERROR: Health check update: 27 stuck requests are blocked > 4096 sec (REQUEST_STUCK). Clients can't write any data into the cluster. osd22 and osd40 are the OSDs
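
A hedged example of inspecting the blocked requests on the OSDs named in the report (run on the hosts carrying osd.22 / osd.40):

  ceph health detail
  ceph daemon osd.22 dump_blocked_ops
  ceph daemon osd.22 dump_ops_in_flight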

Re: [ceph-users] ceph-volume lvm deactivate/destroy/zap

2018-01-08 Thread Alfredo Deza
On Mon, Jan 8, 2018 at 10:53 AM, Dan van der Ster wrote: > On Mon, Jan 8, 2018 at 4:37 PM, Alfredo Deza wrote: >> On Thu, Dec 21, 2017 at 11:35 AM, Stefan Kooman wrote: >>> Quoting Dan van der Ster (d...@vanderster.com): Thanks Stefan.

Re: [ceph-users] cephfs degraded on ceph luminous 12.2.2

2018-01-08 Thread Alessandro De Salvo
Thanks Lincoln, indeed, as I said the cluster is recovering, so there are pending ops:
    pgs: 21.034% pgs not active
         1692310/24980804 objects degraded (6.774%)
         5612149/24980804 objects misplaced (22.466%)
         458 active+clean
         329