could not give any ioprio for disk reads or writes, and
> additionally
> the bcache cache was poisoned by scrub/rebalance.
>
> Fortunately for me, it is very easy to rolling-replace OSDs.
> I now use some SSD partitions for journals and what is left for pure SSD
> storage.
> Thi
within a pool. This becomes bad at the point a
snapshot gets removed: the snapid gets put into removed_snaps, and at some
point the OSDs go trimming and might prematurely get rid of clones.
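The trimming hazard described above can be sketched as a toy model (this is illustrative Python, not Ceph source; Ceph actually keeps removed_snaps as a compact interval_set, and the class and method names here are invented):

```python
# Toy model of removed_snaps-driven trimming. Not Ceph code: Ceph uses an
# interval_set of snapids; a plain Python set keeps the sketch short.

class RemovedSnaps:
    def __init__(self):
        self.snapids = set()

    def remove_snap(self, snapid):
        # On snapshot deletion the snapid is recorded here; OSDs later
        # trim clones that no remaining snapshot still references.
        self.snapids.add(snapid)

    def trim_clones(self, clones):
        # clones: {clone_id: set of snapids keeping that clone alive}.
        # A clone is only safe to trim when *every* snap referencing it
        # has been removed; trimming earlier loses data a snapshot needs.
        return [c for c, snaps in clones.items() if snaps <= self.snapids]

rs = RemovedSnaps()
rs.remove_snap(4)
clones = {10: {4}, 11: {4, 5}}
print(rs.trim_clones(clones))  # [10] -- clone 11 is still pinned by snap 5
```

The bug scenario in the thread is the inverse: if a snapid lands in removed_snaps while something still legitimately references the clone, the trim fires prematurely.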
Cheers,
--
Kjetil Joergensen
SRE, Medallia Inc
--
Kjetil Joergensen
SRE, Medallia Inc
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>> --
> >>> | BIT BV http://www.bit.nl/ Kamer van Koophandel 09090351
> >>> | GPG: 0xD14839C6 +31 318 648 688 / i...@bit.nl
Hi,
addendum: We're running 12.2.4 (52085d5249a80c5f5121a76d6288429f35e4e77b).
The workload is a mix of 3x-replicated & erasure-coded pools (rbd, cephfs, rgw).
-KJ
On Tue, Mar 6, 2018 at 3:53 PM, Kjetil Joergensen
wrote:
> Hi,
>
> so.. +1
>
> We don't run compression as fa
predating a stable bluestore by some amount.
>>
>
>
> 12.2.2 -> 12.2.4 at 2018/03/10: I don't see any increase in memory usage.
> No compression, of course.
>
>
> http://storage6.static.itmages.com/i/18/0319/h_1521453809_91314
an a single OSD
> heartbeat failing to produce an _actual_ failure, so as to prevent false
> positives. Thanks for the insight!
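The idea of requiring more than a single reporter before marking an OSD down (cf. the mon_osd_min_down_reporters option) can be sketched like this; it is a toy model, and the class and method names are invented, not the monitor's actual code:

```python
# Toy model of multi-reporter failure confirmation, loosely after the
# mon_osd_min_down_reporters option. All names here are invented for
# illustration; this is not Ceph monitor source.

class DownReportTracker:
    def __init__(self, min_reporters=2):
        self.min_reporters = min_reporters
        self.reports = {}  # target osd id -> set of distinct reporter ids

    def report_failure(self, target_osd, reporter_osd):
        """Record one peer's failure report; return True once enough
        distinct peers agree the target should be marked down."""
        peers = self.reports.setdefault(target_osd, set())
        peers.add(reporter_osd)
        return len(peers) >= self.min_reporters

tracker = DownReportTracker(min_reporters=2)
print(tracker.report_failure(target_osd=7, reporter_osd=1))  # False: one reporter
print(tracker.report_failure(target_osd=7, reporter_osd=1))  # False: duplicate report
print(tracker.report_failure(target_osd=7, reporter_osd=2))  # True: two distinct peers
```

Counting distinct reporters rather than raw reports is what filters out the single flaky heartbeat path discussed above.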
>
>
>
> --
> Andre Goree
> -=-=-=-=-=-
> Email - andre at drenet.net
> Website - http://blog.drenet.net
> PGP key - http://www.drenet.net/
o recover. That pg is part of a
> cephfs_data pool with compression set to force/snappy.
>
> Does anyone have any suggestions?
>
> -Michael
>
detect possible issues due
> to having too many PGs?
>
> Thanks,
>
> Vlad
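As a rough sanity check against too many PGs, the commonly cited heuristic (approximately OSDs × 100 / replica size, rounded to a power of two, per the old Ceph pgcalc guidance) can be computed as below; the function name is invented, and the result is a starting point, not a guarantee:

```python
# Rough PG-count heuristic, approximately the old Ceph pgcalc rule:
# target ~= (num_osds * pgs_per_osd) / pool_size, rounded to a power of
# two. suggest_pg_num is an invented helper, not a Ceph API.

def suggest_pg_num(num_osds, pool_size=3, pgs_per_osd=100):
    target = num_osds * pgs_per_osd / pool_size
    # Find the largest power of two at or below the target...
    power = 1
    while power * 2 <= target:
        power *= 2
    # ...then pick whichever neighbouring power of two is closer.
    return power * 2 if (target - power) > (power * 2 - target) else power

print(suggest_pg_num(num_osds=12, pool_size=3))  # 400 target -> 512
print(suggest_pg_num(num_osds=40, pool_size=3))  # ~1333 target -> 1024
```

Comparing the suggestion against the actual per-pool pg_num is one quick way to spot pools that were sized for a much larger cluster.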
--
Kjetil Joergensen
SRE, Medallia Inc
ncremental_map 1207929 with features 504412504116439552
> 2018-12-26 19:51:46.232523 7fffee40c700 20 mon.cephmon00@0(leader).osd
> e1208017 build_incrementalinc 1207929 175613 bytes
> ... a lot more of reencode_incremental stuff ...
> 2018-12-26 19:51:46.745394 7fffee40c700 10 mon.ce
m is.
>
>
> [1]
> https://www.intel.com/content/www/us/en/products/memory-storage/solid-state-drives/data-center-ssds/dc-p4600-series/dc-p4600-3-2tb-2-5inch-3d1.html
, it crept up on us and didn't appear on our radar until we had blocked
requests on an OSD for minutes, ending up affecting e.g. rbd.
> Background:
> https://www.spinics.net/lists/ceph-users/msg51985.html
> http://tracker.ceph.com/issues/38849
>
> thanks,
> Ben