Re: [ceph-users] Any recommendations for CephFS metadata/data pool sizing?

2017-07-17 Thread Riccardo Murri
(David Turner, Mon, Jul 03, 2017 at 03:12:28PM +:)
> I would also recommend keeping each pool at a power-of-two number of PGs.
> So with the 512-PG example, use 512 PGs for the data pool and 64 PGs for
> the metadata pool.

Thanks for all the suggestions!

Eventually I went with a 1:7 metadata:data split as a default for
testing the FS.
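
For concreteness, a 1:7 metadata:data split over a 512-PG budget works out
to 64 and 448 PGs. A small sketch of the arithmetic (my illustration, not
from the original mail; `meta_fraction` is an assumed knob):

```python
import math

def split_pgs(total_pgs, meta_fraction=1 / 8):
    """Split a PG budget between CephFS metadata and data pools.

    The metadata share is rounded down to a power of two, following the
    advice elsewhere in this thread; the data pool gets the remainder.
    """
    meta = 2 ** int(math.log2(total_pgs * meta_fraction))
    return meta, total_pgs - meta

# A 512-PG budget with a 1:7 split -> 64 metadata, 448 data
print(split_pgs(512))
```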

Thanks,
Riccardo

-- 
Riccardo Murri / Email: riccardo.mu...@gmail.com / Tel.: +41 77 458 98 32
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Any recommendations for CephFS metadata/data pool sizing?

2017-07-03 Thread David Turner
I would also recommend keeping each pool at a power-of-two number of PGs.
So with the 512-PG example, use 512 PGs for the data pool and 64 PGs for
the metadata pool.
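
That suggestion maps to pool-creation commands along these lines (a sketch
only; the pool and filesystem names are placeholders, not from the thread):

```shell
# Data pool with 512 PGs, metadata pool with 64 -- both powers of two.
ceph osd pool create cephfs_data 512
ceph osd pool create cephfs_metadata 64
# Tie the two pools together as a CephFS filesystem.
ceph fs new cephfs cephfs_metadata cephfs_data
```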

On Sat, Jul 1, 2017 at 9:01 AM Wido den Hollander  wrote:

>
> > On 1 July 2017 at 1:04, Tu Holmes wrote:
> >
> >
> > I would use the PG calculator at ceph.com and just set it for "all in one".
> >
> > http://ceph.com/pgcalc/
> >
>
> I wouldn't do that. With CephFS the data pool(s) will contain many more
> objects and much more data than the metadata pool.
>
> You can easily have 1024 PGs for the metadata pool and 8192 for the data
> pool for example.
>
> With the example of 512 PGs in total I'd assign 64 to the metadata pool
> and the rest to the data pool.
>
> Wido
>
> >
> > On Fri, Jun 30, 2017 at 6:45 AM Riccardo Murri wrote:
> >
> > > Hello!
> > >
> > > Are there any recommendations for how many PGs to allocate to a CephFS
> > > meta-data pool?
> > >
> > > Assuming a simple case of a cluster with 512 PGs, to be distributed
> > > across the FS data and metadata pools, how would you make the split?
> > >
> > > Thanks,
> > > Riccardo


Re: [ceph-users] Any recommendations for CephFS metadata/data pool sizing?

2017-07-01 Thread Wido den Hollander

> On 1 July 2017 at 1:04, Tu Holmes wrote:
> 
> 
> I would use the PG calculator at ceph.com and just set it for "all in one".
> 
> http://ceph.com/pgcalc/
> 

I wouldn't do that. With CephFS the data pool(s) will contain many more
objects and much more data than the metadata pool.

You can easily have 1024 PGs for the metadata pool and 8192 for the data pool 
for example.

With the example of 512 PGs in total I'd assign 64 to the metadata pool and the 
rest to the data pool.

Wido
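
The PG counts quoted in this thread (64, 512, 1024, 8192) are all powers of
two, which keeps objects evenly spread across PGs. A quick way to check a
candidate count, using the standard bit trick (my sketch, not part of the
original mail):

```python
def is_power_of_two(n: int) -> bool:
    """True when n is a positive power of two, i.e. a 'clean' PG count."""
    return n > 0 and (n & (n - 1)) == 0

# The counts discussed in this thread:
for pgs in (64, 512, 1024, 8192):
    print(pgs, is_power_of_two(pgs))
```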

> 
> On Fri, Jun 30, 2017 at 6:45 AM Riccardo Murri 
> wrote:
> 
> > Hello!
> >
> > Are there any recommendations for how many PGs to allocate to a CephFS
> > meta-data pool?
> >
> > Assuming a simple case of a cluster with 512 PGs, to be distributed
> > across the FS data and metadata pools, how would you make the split?
> >
> > Thanks,
> > Riccardo


Re: [ceph-users] Any recommendations for CephFS metadata/data pool sizing?

2017-06-30 Thread Tu Holmes
I would use the PG calculator at ceph.com and just set it for "all in one".

http://ceph.com/pgcalc/


On Fri, Jun 30, 2017 at 6:45 AM Riccardo Murri 
wrote:

> Hello!
>
> Are there any recommendations for how many PGs to allocate to a CephFS
> meta-data pool?
>
> Assuming a simple case of a cluster with 512 PGs, to be distributed
> across the FS data and metadata pools, how would you make the split?
>
> Thanks,
> Riccardo