Looks like it to me, yeah. Not sure why it took so long to get noticed,
though (that is, is that behavior present in the release you're using,
or is it a new bug?).
-Greg
On Thu, Feb 11, 2016 at 12:11 PM, Stephen Lord wrote:
I saw this go by in the commit log:
commit cc2200c5e60caecf7931e546f6522b2ba364227f
Merge: f8d5807 12c083e
Author: Sage Weil
Date: Thu Feb 11 08:44:35 2016 -0500
Merge pull request #7537 from ifed01/wip-no-promote-for-delete-fix
osd: fix unnecessary object promotion when deleting
On Fri, Feb 5, 2016 at 6:39 AM, Stephen Lord wrote:
I looked at this system this morning, and it actually finished what it was
doing. The erasure coded pool still contains all the data, and the cache
pool has about a million zero-sized objects:
GLOBAL:
    SIZE      AVAIL     RAW USED    %RAW USED    OBJECTS
    15090G    9001G     608
On Thu, Feb 4, 2016 at 5:07 PM, Stephen Lord wrote:
> On Feb 4, 2016, at 6:51 PM, Gregory Farnum wrote:
>
> I presume we're doing reads in order to gather some object metadata
> from the cephfs-data pool; and the (small) newly-created objects in
> cache-data are definitely whiteout objects indicating the object no
> longer exists logically.
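For anyone wanting to confirm this on their own cluster, one way to spot the whiteouts is to look for zero-byte objects in the cache pool. This is a sketch, not a command from the thread: the pool name `cache-data` is taken from the setup quoted below, it needs a running cluster with a client keyring, and the `rados stat` output format may vary between releases.

```shell
# Stat every object in the cache pool; whiteout objects show up as
# zero-byte entries. Count how many there are.
rados -p cache-data ls | while read -r obj; do
  rados -p cache-data stat "$obj"
done | grep ', size 0$' | wc -l
```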
On Thu, Feb 4, 2016 at 4:37 PM, Stephen Lord wrote:
I set up a cephfs file system with a cache tier over an erasure coded tier as an
experiment:
ceph osd erasure-code-profile set raid6 k=4 m=2
ceph osd pool create cephfs-metadata 512 512
ceph osd pool set cephfs-metadata size 3
ceph osd pool create cache-data 2048 2048
ceph osd pool cre
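The quoted command list is cut off above; for reference, wiring a writeback cache tier in front of an erasure coded base pool generally continues along these lines. This is a sketch based on the standard Ceph tiering commands, not the poster's exact commands; the base pool name `cephfs-data` and the `raid6` profile are assumed from earlier in the thread.

```shell
# Create the erasure coded base pool using the raid6 profile defined
# above, then attach cache-data as a writeback cache tier in front of it.
ceph osd pool create cephfs-data 2048 2048 erasure raid6
ceph osd tier add cephfs-data cache-data
ceph osd tier cache-mode cache-data writeback
ceph osd tier set-overlay cephfs-data cache-data
```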