Re: [ceph-users] using Bcache on blueStore

2017-10-13 Thread Kjetil Joergensen
> ... could not give any ioprio for disk reads or writes, and additionally the bcache cache was poisoned by scrub/rebalance. Fortunately for me, it is very easy to rolling-replace OSDs. I use some SSD partitions for journal now and what is left for pure SSD storage. ...
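
A minimal bcache setup sketch for reference (device paths and the cset UUID are placeholders, not from this thread; the sequential_cutoff knob is one common way to keep large sequential scrub/rebalance I/O from evicting hot data):

    # Format the cache (SSD) and backing (HDD) devices -- paths are illustrative
    make-bcache -C /dev/nvme0n1p1
    make-bcache -B /dev/sdb
    # Attach the backing device to the cache set (UUID via bcache-super-show)
    echo <cset-uuid> > /sys/block/bcache0/bcache/attach
    # Sequential I/O longer than this cutoff (default 4M) bypasses the cache,
    # so scrub/rebalance streams do not poison it
    echo 4194304 > /sys/block/bcache0/bcache/sequential_cutoff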

[ceph-users] Duplicate snapid's

2017-11-30 Thread Kjetil Joergensen
... within a pool. This becomes bad once a snapshot is removed: the snapid gets put into removed_snaps, and at some point the OSDs go trimming and might prematurely get rid of clones. Cheers, -- Kjetil Joergensen SRE, Medallia Inc
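
A way to see which snapids back an object's clones, and the pool-level removed_snaps set mentioned above (pool/object names are placeholders):

    # Clones of an object and the snapid each clone belongs to
    rados -p mypool listsnaps myobject
    # Pool lines in the osdmap dump carry snap_seq and removed_snaps
    ceph osd dump | grep removed_snaps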

Re: [ceph-users] bcache, dm-cache support

2018-10-10 Thread Kjetil Joergensen

Re: [ceph-users] Memory leak in Ceph OSD?

2018-03-06 Thread Kjetil Joergensen

Re: [ceph-users] Memory leak in Ceph OSD?

2018-03-06 Thread Kjetil Joergensen
Hi, addendum: We're running 12.2.4 (52085d5249a80c5f5121a76d6288429f35e4e77b). The workload is a mix of 3x-replicated & erasure-coded (rbd, cephfs, rgw). -KJ On Tue, Mar 6, 2018 at 3:53 PM, Kjetil Joergensen wrote: > Hi, so.. +1 > We don't run compression as fa...
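
Two commands commonly used to chase this kind of OSD memory growth (osd.0 is a placeholder; both assume Luminous or later):

    # Per-subsystem memory accounting inside the OSD process
    ceph daemon osd.0 dump_mempools
    # tcmalloc heap statistics; growth the mempools don't explain points
    # at the allocator or at untracked allocations
    ceph tell osd.0 heap stats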

Re: [ceph-users] Memory leak in Ceph OSD?

2018-03-21 Thread Kjetil Joergensen
... predating a stable BlueStore by some amount. > 12.2.2 -> 12.2.4 at 2018/03/10: I don't see an increase in memory usage. No compression, of course. > http://storage6.static.itmages.com/i/18/0319/h_1521453809_91314...

Re: [ceph-users] Random individual OSD failures with "connection refused reported by" another OSD?

2018-03-28 Thread Kjetil Joergensen
> ... than a single OSD heartbeat failing to produce an _actual_ failure, so as to prevent false positives. Thanks for the insight! -- Andre Goree, andre at drenet.net, http://blog.drenet.net
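
The mon already has a knob along these lines; a sketch of inspecting and raising it (mon.<id> and the value 3 are illustrative):

    # Distinct peers that must report an OSD dead before it is marked down
    ceph daemon mon.<id> config get mon_osd_min_down_reporters
    # Raise it at runtime (Luminous-era injectargs route)
    ceph tell mon.* injectargs '--mon_osd_min_down_reporters=3'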

Re: [ceph-users] Have an inconsistent PG, repair not working

2018-04-02 Thread Kjetil Joergensen
... to recover. That PG is part of a cephfs_data pool with compression set to force/snappy. Does anyone have any suggestions? -Michael
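
The usual first steps for an inconsistent PG, for reference (1.2f stands in for the real pgid):

    # Show exactly which object copies/shards scrub flagged, and why
    rados list-inconsistent-obj 1.2f --format=json-pretty
    # Ask the primary to repair from the authoritative copy
    ceph pg repair 1.2f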

Re: [ceph-users] How many PGs per OSD is too many?

2018-11-14 Thread Kjetil Joergensen
> ... detect possible issues due to having too many PGs? Thanks, Vlad -- Kjetil Joergensen SRE, Medallia Inc
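
The usual sizing arithmetic, plus a way to see the live per-OSD count (numbers are illustrative):

    # Rule of thumb: ~100 PGs per OSD after replication, rounded to a power of two:
    #   pg_num ~= num_osds * 100 / replica_count
    #   e.g. 40 OSDs, size 3: 40 * 100 / 3 ~= 1333 -> pick 1024 or 2048
    # Actual PGs currently mapped to each OSD (PGS column):
    ceph osd df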

Re: [ceph-users] cephfs kernel client instability

2019-01-15 Thread Kjetil Joergensen
> ...incremental_map 1207929 with features 504412504116439552
> 2018-12-26 19:51:46.232523 7fffee40c700 20 mon.cephmon00@0(leader).osd e1208017 build_incremental inc 1207929 175613 bytes
> ... a lot more of reencode_incremental stuff ...
> 2018-12-26 19:51:46.745394 7fffee40c700 10 mon.ce...
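
To capture (and then silence) a trace like the one quoted, assuming the mon name from the log above (level 20 is extremely chatty, so revert promptly):

    # Crank monitor debugging while reproducing the osdmap re-encode churn
    ceph daemon mon.cephmon00 config set debug_mon 20/20
    # ...reproduce and collect the mon log, then revert to the default
    ceph daemon mon.cephmon00 config set debug_mon 1/5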

Re: [ceph-users] Intel P4600 3.2TB U.2 form factor NVMe firmware problems causing dead disks

2019-03-04 Thread Kjetil Joergensen
... m is. > [1] https://www.intel.com/content/www/us/en/products/memory-storage/solid-state-drives/data-center-ssds/dc-p4600-series/dc-p4600-3-2tb-2-5inch-3d1.html
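
A quick check of which firmware revision a drive is actually running (device path illustrative; needs nvme-cli and/or smartmontools):

    # The FW Rev column lists the running firmware per NVMe device
    nvme list
    # Identify info, including firmware revision, for one controller
    smartctl -i /dev/nvme0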

Re: [ceph-users] Limits of mds bal fragment size max

2019-04-16 Thread Kjetil Joergensen
... it crept up on us and didn't appear on our radar until we had blocked requests on an OSD for minutes, ending up affecting e.g. RBD. > Background: > https://www.spinics.net/lists/ceph-users/msg51985.html > http://tracker.ceph.com/issues/38849 > thanks, Ben
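
For reference, a sketch of inspecting and raising the limit in question (mds.<id> and the value are illustrative; larger fragments mean larger omap objects on the metadata pool, which is the trade-off behind slow requests like those described above):

    # Current value on a running MDS
    ceph daemon mds.<id> config get mds_bal_fragment_size_max
    # Raise it via the central config store (Mimic+)
    ceph config set mds mds_bal_fragment_size_max 200000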