Thanks John. I removed these pools on Friday and, as you suspected,
there was no impact.

Regards,
Rich

On 8 January 2018 at 23:15, John Spray <jsp...@redhat.com> wrote:
> On Mon, Jan 8, 2018 at 2:55 AM, Richard Bade <hitr...@gmail.com> wrote:
>> Hi Everyone,
>> I've got a couple of pools that I don't believe are being used but
>> which have a reasonably large number of PGs (approx. 50% of our total).
>> I'd like to delete them, but as they pre-existed when I inherited the
>> cluster, I wanted to make sure they aren't needed for anything first.
>> Here's the details:
>> POOLS:
>>     NAME                   ID     USED       %USED     MAX AVAIL     OBJECTS
>>     data                   0           0         0        88037G            0
>>     metadata               1           0         0        88037G            0
>>
>> We don't run CephFS, and I believe these pools are meant for that, but
>> they may have been created by default when the cluster was set up (back
>> on dumpling or bobtail, I think).
>> As far as I can tell there is no data in them. Do they need to exist
>> for some Ceph function?
>> The pool names worry me a little, as they sound important.
>
> The data and metadata pools were indeed created by default in older
> versions of Ceph, for use by CephFS.  Since you're not using CephFS,
> and nobody is using the pools for anything else either (they're
> empty), you can go ahead and delete them.
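>
> As a rough sketch (and note that on Luminous and later the monitors
> must also have mon_allow_pool_delete enabled before deletion is
> allowed), verifying and removing them would look something like:
>
>     ceph fs ls        # should list no filesystems
>     rados df          # both pools should report 0 objects
>     ceph osd pool delete data data --yes-i-really-really-mean-it
>     ceph osd pool delete metadata metadata --yes-i-really-really-mean-it
>
> The pool name is given twice deliberately; together with the
> --yes-i-really-really-mean-it flag it is the CLI's guard against
> accidental deletion.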
>
>>
>> They have 3136 PGs each, so I'd like to be rid of them so that I can
>> increase the number of PGs in my actual data pools without going over
>> 300 PGs per OSD.
>> Here's the osd dump:
>> pool 0 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash
>> rjenkins pg_num 3136 pgp_num 3136 last_change 1 crash_replay_interval
>> 45 min_read_recency_for_promote 1 min_write_recency_for_promote 1
>> stripe_width 0
>> pool 1 'metadata' replicated size 2 min_size 1 crush_ruleset 1
>> object_hash rjenkins pg_num 3136 pgp_num 3136 last_change 1
>> min_read_recency_for_promote 1 min_write_recency_for_promote 1
>> stripe_width 0
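>>
>> (Back-of-the-envelope: 3136 PGs x 2 pools x 2 replicas = 12544 PG
>> instances spread over 180 OSDs, i.e. roughly 70 PGs per OSD tied up
>> in these empty pools.)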
>>
>> Also, what performance impact am I likely to see when Ceph removes the
>> empty PGs, considering they're approx. 50% of my total PGs across my
>> 180 OSDs?
>
> Given that they're empty, I'd expect little if any noticeable impact.
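>
> If you want to watch it happen, running "ceph -s" (or "ceph -w")
> during the removal will show the total PG count in the pgmap dropping,
> and any slow requests will surface in the health output if the OSDs do
> struggle, though with empty PGs the deletion should be quick.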
>
> John
>
>>
>> Thanks,
>> Rich