Re: [ceph-users] Balanced MDS, all as active and recommended client settings.

2018-02-23 Thread Patrick Donnelly
On Fri, Feb 23, 2018 at 12:54 AM, Daniel Carrasco  wrote:
>  client_permissions = false

Yes, this will potentially reduce checks against the MDS.

>   client_quota = false

This option no longer exists since Luminous; quota enforcement is no
longer optional. However, if you don't have any quotas then there is
no added load on the client/mds.
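
If you want to double-check that nothing has a quota set: CephFS quotas
are stored as virtual extended attributes on directories, so something
like this (assuming a mount at /mnt/cephfs; the path is only an example)
will tell you whether a limit exists:

  # "No such attribute" (or a value of 0) means no quota is set on
  # that directory.
  getfattr -n ceph.quota.max_bytes /mnt/cephfs/some/dir
  getfattr -n ceph.quota.max_files /mnt/cephfs/some/dir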

-- 
Patrick Donnelly


Re: [ceph-users] Balanced MDS, all as active and recommended client settings.

2018-02-23 Thread Daniel Carrasco
Finally, I've changed the configuration to the following:

##
### MDS
##
[mds]
  mds_cache_memory_limit = 792723456
  mds_bal_mode = 1

##
### Client
##
[client]
  client_cache_size = 32768
  client_mount_timeout = 30
  client_oc_max_objects = 2
  client_oc_size = 1048576000
  client_permissions = false
  client_quota = false
  rbd_cache = true
  rbd_cache_size = 671088640

I've disabled client_permissions and client_quota because the cluster is
only for the webpage and the network is isolated, so the clients don't need
to check permissions on every access, and I've disabled the quota check
because there are no quotas on this cluster.
This will lower the number of requests to the MDS and the CPU usage, right?
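
To verify it, my plan (just a sketch; mds.fs-01 is the daemon name on my
first node) is to watch the MDS perf counters before and after the change:

  # Dump the counters once and look at the request counters in the
  # "mds" section, or watch them update live with daemonperf.
  ceph daemon mds.fs-01 perf dump
  ceph daemonperf mds.fs-01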

Regards!!

2018-02-22 19:34 GMT+01:00 Patrick Donnelly :

> On Wed, Feb 21, 2018 at 11:17 PM, Daniel Carrasco 
> wrote:
> > I also want to find out if there is any way to cache file metadata on
> > the client, to lower the MDS load. I suppose that files are cached, but
> > the client checks with the MDS whether the files have changed. On my
> > server the files are read-only most of the time, so the MDS data could
> > also be cached for a while.
>
> The MDS issues capabilities that allow clients to coherently cache
> metadata.
>
> --
> Patrick Donnelly
>



-- 
_

  Daniel Carrasco Marín
  Ingeniería para la Innovación i2TIC, S.L.
  Tlf:  +34 911 12 32 84 Ext: 223
  www.i2tic.com
_


Re: [ceph-users] Balanced MDS, all as active and recommended client settings.

2018-02-22 Thread Patrick Donnelly
On Wed, Feb 21, 2018 at 11:17 PM, Daniel Carrasco  wrote:
> I also want to find out if there is any way to cache file metadata on the
> client, to lower the MDS load. I suppose that files are cached, but the
> client checks with the MDS whether the files have changed. On my server
> the files are read-only most of the time, so the MDS data could also be
> cached for a while.

The MDS issues capabilities that allow clients to coherently cache metadata.
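
If you want to see that in action, you can list the client sessions on the
active MDS (a sketch; run it on the MDS host and substitute your daemon
name for <name>):

  # Each session entry includes a "num_caps" field: the number of
  # capabilities (cacheable inodes/dentries) that client currently holds.
  ceph daemon mds.<name> session ls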

-- 
Patrick Donnelly


Re: [ceph-users] Balanced MDS, all as active and recommended client settings.

2018-02-21 Thread Daniel Carrasco
Thanks, I'll check it.

I also want to find out if there is any way to cache file metadata on the
client, to lower the MDS load. I suppose that files are cached, but the
client checks with the MDS whether the files have changed. On my server the
files are read-only most of the time, so the MDS data could also be cached
for a while.

Regards!!

On 22 Feb 2018 at 3:59, "Patrick Donnelly"  wrote:

> Hello Daniel,
>
> On Wed, Feb 21, 2018 at 10:26 AM, Daniel Carrasco 
> wrote:
> > Is it possible to distribute the MDS load better across both nodes?
>
> We are aware of bugs with the balancer which are being worked on. You
> can also manually create a partition if the workload can benefit:
>
> https://ceph.com/community/new-luminous-cephfs-subtree-pinning/
>
> > Is it possible to set all nodes as active without problems?
>
> No. I recommend you read the docs carefully:
>
> http://docs.ceph.com/docs/master/cephfs/multimds/
>
> > My last question is whether someone can recommend a good client
> > configuration, like cache size, and maybe something to lower the
> > metadata servers' load.
>
> >>
> >> ##
> >> [mds]
> >>  mds_cache_size = 25
> >>  mds_cache_memory_limit = 792723456
>
> You should only specify one of those. See also:
>
> http://docs.ceph.com/docs/master/cephfs/cache-size-limits/
>
> --
> Patrick Donnelly
>


Re: [ceph-users] Balanced MDS, all as active and recommended client settings.

2018-02-21 Thread Patrick Donnelly
Hello Daniel,

On Wed, Feb 21, 2018 at 10:26 AM, Daniel Carrasco  wrote:
> Is it possible to distribute the MDS load better across both nodes?

We are aware of bugs with the balancer which are being worked on. You
can also manually create a partition if the workload can benefit:

https://ceph.com/community/new-luminous-cephfs-subtree-pinning/
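
Pinning itself is just an extended attribute on a directory (assuming a
mount at /mnt/cephfs; the paths are only examples):

  # Pin one subtree to MDS rank 0 and another to rank 1; setting the
  # value to -1 removes the pin again.
  setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/site-a
  setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/site-b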

> Is it possible to set all nodes as active without problems?

No. I recommend you read the docs carefully:

http://docs.ceph.com/docs/master/cephfs/multimds/

> My last question is whether someone can recommend a good client
> configuration, like cache size, and maybe something to lower the metadata
> servers' load.

>>
>> ##
>> [mds]
>>  mds_cache_size = 25
>>  mds_cache_memory_limit = 792723456

You should only specify one of those. See also:

http://docs.ceph.com/docs/master/cephfs/cache-size-limits/
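
In other words, on Luminous it should be enough to keep only the
memory-based limit (a sketch reusing your current value;
mds_cache_memory_limit replaced the inode-count-based mds_cache_size):

  [mds]
    mds_cache_memory_limit = 792723456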

-- 
Patrick Donnelly


Re: [ceph-users] Balanced MDS, all as active and recommended client settings.

2018-02-21 Thread Daniel Carrasco
2018-02-21 19:26 GMT+01:00 Daniel Carrasco :

> Hello,
>
> I've created a Ceph cluster with 3 nodes to serve files to a high-traffic
> webpage. I've configured two MDS as active and one as standby, but after
> putting the new system into production I've noticed that the MDS load is
> not balanced and one server gets most of the client requests (about 700 or
> fewer on one MDS vs. 4,000 or more on the other).
>
> Is it possible to distribute the MDS load better across both nodes?
> Is it possible to set all nodes as active without problems?
>
> I know it's possible to set max_mds to 3 so all of them are active, but I
> want to know what happens if one node goes down, for example, or whether
> there are other side effects.
>
>
> My last question is whether someone can recommend a good client
> configuration, like cache size, and maybe something to lower the metadata
> servers' load.
>
>
> Thanks!!
>

I forgot to include my configuration xD.

I have a three-node cluster with everything running on each node (AIO):

   - 3 Monitors
   - 3 OSDs
   - 3 MDS (2 active and 1 standby)
   - 3 MGR (1 active)

The data has 3 copies, so it is on every node.

My configuration file is:
[global]
fsid = BlahBlahBlah
mon_initial_members = fs-01, fs-02, fs-03
mon_host = 192.168.4.199,192.168.4.200,192.168.4.201
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

public network = 192.168.4.0/24
osd pool default size = 3


##
### OSD
##
[osd]
  osd_pool_default_pg_num = 128
  osd_pool_default_pgp_num = 128
  osd_pool_default_size = 3
  osd_pool_default_min_size = 2

  osd_mon_heartbeat_interval = 5
  osd_mon_report_interval_max = 10
  osd_heartbeat_grace = 15
  osd_fast_fail_on_connection_refused = True


##
### MON
##
[mon]
  mon_osd_min_down_reporters = 2

##
### MDS
##
[mds]
  mds_cache_size = 25
  mds_cache_memory_limit = 792723456

##
### Client
##
[client]
  client_cache_size = 32768
  client_mount_timeout = 30
  client_oc_max_objects = 2000
  client_oc_size = 629145600
  rbd_cache = true
  rbd_cache_size = 671088640


Thanks!!!

-- 
_

  Daniel Carrasco Marín
  Ingeniería para la Innovación i2TIC, S.L.
  Tlf:  +34 911 12 32 84 Ext: 223
  www.i2tic.com
_


[ceph-users] Balanced MDS, all as active and recommended client settings.

2018-02-21 Thread Daniel Carrasco
Hello,

I've created a Ceph cluster with 3 nodes to serve files to a high-traffic
webpage. I've configured two MDS as active and one as standby, but after
putting the new system into production I've noticed that the MDS load is
not balanced and one server gets most of the client requests (about 700 or
fewer on one MDS vs. 4,000 or more on the other).

Is it possible to distribute the MDS load better across both nodes?
Is it possible to set all nodes as active without problems?

I know it's possible to set max_mds to 3 so all of them are active, but I
want to know what happens if one node goes down, for example, or whether
there are other side effects.
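
As far as I understand from the docs (just a sketch; "cephfs" is a
placeholder for my filesystem's name), that would be something like:

  # On Luminous, multiple active MDS daemons may need to be allowed
  # explicitly first (depending on the exact version):
  ceph fs set cephfs allow_multimds true
  # Then raise the number of active ranks to 3:
  ceph fs set cephfs max_mds 3

With only 3 MDS daemons in total that leaves no standby, which is why I
wonder what happens when a node fails.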


My last question is whether someone can recommend a good client
configuration, like cache size, and maybe something to lower the metadata
servers' load.


Thanks!!
-- 
_

  Daniel Carrasco Marín
  Ingeniería para la Innovación i2TIC, S.L.
  Tlf:  +34 911 12 32 84 Ext: 223
  www.i2tic.com
_