>>> We have been using:
>>>
>>> osd op queue = wpq
>>> osd op queue cut off = high
>>>
>>> It virtually eliminates the impact of backfills on our clusters. Our
>
> It does better because it is a fair share queue and doesn't let recovery
> ops take priority over client ops at any point for any
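For reference, the two settings quoted above go in the `[osd]` section of ceph.conf (option spellings as used in Nautilus-era releases; a config sketch, not a complete file):

```ini
[osd]
# Weighted priority queue: client and recovery ops share the
# queue fairly instead of recovery starving clients.
osd op queue = wpq
# Send all ops, including low-priority recovery/backfill,
# through the same strict-priority cut off.
osd op queue cut off = high
```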
Question: If you have enough osds it seems an almost daily thing when
you get to work in the morning there's a "ceph health error" "1 pg
inconsistent" arising from a 'scrub error'. Or 2, etc. Then like
most such mornings you look to see there's two or more valid instances
of the pg and
Hello,
On Sun, 4 Aug 2019 06:34:46 -0500 Mark Nelson wrote:
> On 8/4/19 6:09 AM, Paul Emmerich wrote:
>
> > On Sun, Aug 4, 2019 at 3:47 AM Christian Balzer wrote:
> >
> >> 2. Bluestore caching still broken
> >> When writing data with the fios below, it isn't cached on the OSDs.
> >> Worse,
On Fri, Aug 2, 2019 at 12:13 AM Pierre Dittes wrote:
>
> Hi,
> Hi,
> we had a major mishap with our CephFS. Long story short: no journal backup,
> and the journal was truncated.
> Now I still see a metadata pool with all objects, and the data pool is fine; from
> what I know neither was corrupted. Last
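If both pools are intact but the MDS journal is gone, the upstream CephFS disaster-recovery documentation describes rebuilding metadata by scanning the data pool. A heavily hedged sketch of that procedure (`<fs_name>` and `<data_pool>` are placeholders; read the docs for your exact release before running any of this, as these commands rewrite metadata):

```shell
# Reset the already-damaged journal (destructive; only do this
# when the journal is already lost, as in this thread):
cephfs-journal-tool --rank=<fs_name>:0 journal reset

# Rebuild metadata from the data pool, in this order:
cephfs-data-scan scan_extents <data_pool>
cephfs-data-scan scan_inodes <data_pool>
cephfs-data-scan scan_links
```

The scan_extents and scan_inodes passes can take a long time on large pools; the docs describe running them with multiple workers.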
If all you want to do is repair the pg when it finds an inconsistent pg,
you could set osd_scrub_auto_repair to true.
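The option mentioned above can be set at runtime; a sketch of that, plus the manual repair path for when you'd rather inspect the inconsistency first (`<pgid>` is a placeholder; Nautilus-era syntax):

```shell
# Let scrub repair inconsistencies automatically when it finds
# only a small number of errors (bounded by
# osd_scrub_auto_repair_num_errors):
ceph config set osd osd_scrub_auto_repair true

# Or handle a single inconsistent PG by hand:
ceph health detail                                # which PG is inconsistent
rados list-inconsistent-obj <pgid> --format=json-pretty
ceph pg repair <pgid>
```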
On Sun, Aug 4, 2019, 9:16 AM Harry G. Coin wrote:
> Question: If you have enough osds it seems an almost daily thing when
> you get to work in the morning there's a "ceph
Afaik no. What's the idea of running a single-host cephfs cluster?
On 4 August 2019 13:27:00 GMT+03:00, Eitan Mosenkis wrote:
>I'm running a single-host Ceph cluster for CephFS and I'd like to keep
>backups in Amazon S3 for disaster recovery. Is there a simple way to
>extract a CephFS snapshot
On Sun, Aug 4, 2019 at 3:47 AM Christian Balzer wrote:
> 2. Bluestore caching still broken
> When writing data with the fios below, it isn't cached on the OSDs.
> Worse, existing cached data that gets overwritten is removed from the
> cache, which while of course correct can't be free in terms
I'm running a single-host Ceph cluster for CephFS and I'd like to keep
backups in Amazon S3 for disaster recovery. Is there a simple way to
extract a CephFS snapshot as a single file and/or to create a file that
represents the incremental difference between two snapshots?
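As far as I know CephFS has no built-in single-file export along the lines of `rbd export-diff`. But snapshots are exposed as ordinary read-only directories under the hidden `.snap` directory, so generic file tools can do both jobs. A sketch, with made-up mount point, snapshot names, and backup paths:

```shell
# Full copy of one snapshot (snapshots appear under .snap at any
# directory of the mounted filesystem):
rsync -a /mnt/cephfs/.snap/backup-2019-08-03/ /backups/2019-08-03/

# Incremental: unchanged files become hardlinks into the previous
# local copy, so only differences consume space and transfer time:
rsync -a --delete --link-dest=/backups/2019-08-03 \
  /mnt/cephfs/.snap/backup-2019-08-04/ /backups/2019-08-04/

# Pack a copy into a single object suitable for S3 upload:
tar czf /backups/2019-08-04.tar.gz -C /backups 2019-08-04
```

Note `--link-dest` only works because the reference directory and the destination sit on the same local filesystem.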
Hi Paul,
Okay, thanks for clarifying. If we see the phenomenon again, we'll just
leave it be.
K.
On 03-08-2019 14:33, Paul Emmerich wrote:
> The usual reason for blacklisting RBD clients is breaking an exclusive
> lock because the previous owner seemed to have crashed.
> Blacklisting the old
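Breaking the lock puts the old client on the OSD blacklist; the entries can be inspected and cleared by hand (Nautilus-era CLI, where the subcommand is still spelled `blacklist`; the address below is just an example):

```shell
ceph osd blacklist ls                          # list blacklisted client addrs
ceph osd blacklist rm 192.168.0.10:0/3012345   # remove one entry early
```

Entries also expire on their own (by default after about an hour), so usually no action is needed.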