[ceph-users] mds_cache_memory_limit

2018-09-11 Thread marc-antoine desrochers
Hi,

 

Is there any recommendation for mds_cache_memory_limit? For example, a
percentage of the total RAM, or something along those lines?
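
For context, a minimal sketch of how the limit can be set on Luminous (the 4 GiB
value is only an example, not a recommendation):

    # ceph.conf on the MDS hosts
    [mds]
    mds_cache_memory_limit = 4294967296   # 4 GiB of MDS cache

    # or change it at runtime without restarting the daemons
    ceph tell mds.* injectargs '--mds_cache_memory_limit=4294967296'

Note that this limit only covers the cache itself, so the resident memory of the
ceph-mds process will usually be somewhat higher than the configured value.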

 

Thanks.



[ceph-users] Need help

2018-09-10 Thread marc-antoine desrochers
Hi,

 

I am currently running a Ceph cluster with CephFS on 3 nodes. Each node has 6
OSDs, except one which has 5. I have 3 MDS daemons (2 active and 1 standby) and
3 mons.

 

 

[root@ceph-n1 ~]# ceph -s
  cluster:
    id:     1d97aa70-2029-463a-b6fa-20e98f3e21fb
    health: HEALTH_WARN
            3 clients failing to respond to capability release
            2 MDSs report slow requests

  services:
    mon: 3 daemons, quorum ceph-n1,ceph-n2,ceph-n3
    mgr: ceph-n1(active), standbys: ceph-n2, ceph-n3
    mds: cephfs-2/2/2 up  {0=ceph-n1=up:active,1=ceph-n2=up:active}, 1 up:standby
    osd: 17 osds: 17 up, 17 in

  data:
    pools:   2 pools, 1024 pgs
    objects: 541k objects, 42006 MB
    usage:   143 GB used, 6825 GB / 6969 GB avail
    pgs:     1024 active+clean

  io:
    client:   32980 B/s rd, 77295 B/s wr, 5 op/s rd, 14 op/s wr



I'm using CephFS as mail storage. I currently have 3,500 mailboxes, some of them
IMAP and the others POP3. The goal is to migrate all the mailboxes from my old
infrastructure, around 30,000 in total.

 

I'm now facing a problem:

MDS_CLIENT_LATE_RELEASE 3 clients failing to respond to capability release
    mdsceph-n1(mds.0): Client mda3.sogetel.net failing to respond to capability release client_id: 1134426
    mdsceph-n1(mds.0): Client mda2.sogetel.net failing to respond to capability release client_id: 1172391
    mdsceph-n2(mds.1): Client mda3.sogetel.net failing to respond to capability release client_id: 1134426
MDS_SLOW_REQUEST 2 MDSs report slow requests
    mdsceph-n1(mds.0): 112 slow requests are blocked > 30 sec
    mdsceph-n2(mds.1): 323 slow requests are blocked > 30 sec

 

I can't figure out how to fix this. 
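
For reference, a few diagnostic commands that may help narrow this down (a
sketch; the client id comes from the health output above, and evicting a client
is a last resort because it will have to remount):

    # list client sessions and how many caps each one holds
    ceph daemon mds.ceph-n1 session ls

    # show the requests that are currently in flight on this MDS
    ceph daemon mds.ceph-n1 dump_ops_in_flight

    # last resort: evict a client that never releases its caps (mds.0 is the rank)
    ceph tell mds.0 client evict id=1134426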

 

Here is some information about my cluster:



I'm running Ceph Luminous 12.2.5 on my 3 Ceph nodes: ceph-n1, ceph-n2,
ceph-n3.

I have 3 identical clients:

LSB Version::core-4.1-amd64:core-4.1-noarch

Distributor ID: Fedora

Description:Fedora release 25 (Twenty Five)

Release:25

Codename:   TwentyFive

 

My Ceph nodes:

 

CentOS Linux release 7.5.1804 (Core)

NAME="CentOS Linux"

VERSION="7 (Core)"

ID="centos"

ID_LIKE="rhel fedora"

VERSION_ID="7"

PRETTY_NAME="CentOS Linux 7 (Core)"

ANSI_COLOR="0;31"

CPE_NAME="cpe:/o:centos:centos:7"

HOME_URL="https://www.centos.org/"

BUG_REPORT_URL="https://bugs.centos.org/"

 

CENTOS_MANTISBT_PROJECT="CentOS-7"

CENTOS_MANTISBT_PROJECT_VERSION="7"

REDHAT_SUPPORT_PRODUCT="centos"

REDHAT_SUPPORT_PRODUCT_VERSION="7"

 


 

ceph daemon mds.ceph-n1 perf dump mds:

    "mds": {
        "request": 21968558,
        "reply": 21954801,
        "reply_latency": {
            "avgcount": 21954801,
            "sum": 100879.560315258,
            "avgtime": 0.004594874
        },
        "forward": 13627,
        "dir_fetch": 3327,
        "dir_commit": 162830,
        "dir_split": 1,
        "dir_merge": 0,
        "inode_max": 2147483647,
        "inodes": 68767,
        "inodes_top": 4524,
        "inodes_bottom": 56697,
        "inodes_pin_tail": 7546,
        "inodes_pinned": 62304,
        "inodes_expired": 1640159,
        "inodes_with_caps": 62192,
        "caps": 114126,
        "subtrees": 14,
        "traverse": 38309963,
        "traverse_hit": 37606227,
        "traverse_forward": 12189,
        "traverse_discover": 6634,
        "traverse_dir_fetch": 1769,
        "traverse_remote_ino": 6,
        "traverse_lock": 7731,
        "load_cent": 2196856701,
        "q": 0,
        "exported": 143,
        "exported_inodes": 291372,
        "imported": 125,
        "imported_inodes": 176509
    }

 

 

Thanks for your help.

 

Regards 

 

Marc-Antoine 



[ceph-users] Unexpected data

2018-06-04 Thread Marc-Antoine Desrochers
Hi,

 

I'm not sure if it's normal or not, but each time I add a new OSD with
ceph-deploy osd create --data /dev/sdg ceph-n1, it adds 1 GB to my global data
usage. I just formatted the drive, so it's supposed to be at 0, right?

So I have 6 OSDs in my cluster and they already take 6 GiB.

 

[root@ceph-n1 ~]# ceph -s
  cluster:
    id:     1d97aa70-2029-463a-b6fa-20e98f3e21fb
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-n1
    mgr: ceph-n1(active)
    mds: cephfs-1/1/1 up  {0=ceph-n1=up:active}
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   2 pools, 600 pgs
    objects: 341 objects, 63109 kB
    usage:   6324 MB used, 2782 GB / 2788 GB avail
    pgs:     600 active+clean

 

 

So I'm kind of confused.
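
For what it's worth, the roughly 1 GB reported per fresh OSD is BlueStore's own
metadata (mainly its embedded RocksDB/WAL), so a newly created OSD is never
reported as completely empty. A quick way to see where the space sits (a
sketch):

    # per-OSD breakdown of raw usage
    ceph osd df tree

    # raw vs. per-pool usage for the whole cluster
    ceph df detail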

Thanks for your help.



[ceph-users] Radosgw

2018-05-28 Thread Marc-Antoine Desrochers
Hi,  

 

I'm new at the company and I took over the Ceph project.

I'm still a newbie on the subject and I'm trying to understand what the
previous admin was trying to do.

Is there any reason someone would install radosgw alongside CephFS?

If not, how can I remove all the radosgw configuration without starting from
scratch?
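
For context, radosgw is independent of CephFS, so it can normally be removed
without touching the filesystem. A rough sketch of what that involves (the
daemon and pool names below are the usual defaults and are assumptions; check
yours with systemctl list-units 'ceph-radosgw*' and ceph osd lspools first):

    # stop and disable the gateway daemon
    systemctl stop ceph-radosgw@rgw.ceph-n1
    systemctl disable ceph-radosgw@rgw.ceph-n1

    # remove its cephx key, then delete its [client.rgw...] section from ceph.conf
    ceph auth del client.rgw.ceph-n1

    # rgw pools are usually .rgw.root and default.rgw.*; deleting one destroys its
    # objects and requires mon_allow_pool_delete=true
    ceph osd pool delete default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it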

 

 




[ceph-users] Dependencies

2018-05-25 Thread Marc-Antoine Desrochers
Hi,

 

I want to know if there are any dependencies between the Ceph admin node and
the other nodes.

Can I delete my Ceph admin node, create a new one, and link it to my OSD
nodes?

Or can I take all the existing OSDs in a node from cluster "A" and transfer
them to cluster "B"?

 

 

Cluster A

    AdminNode --- node1
              --- node2
              --- node3   (take that node and bring it to cluster "B")

Cluster B

    AdminNode --- node1
              --- node2   (cluster A's node3)
 

 

Cheers, Marc-Antoine



[ceph-users] MDS_DAMAGE: 1 MDSs report damaged metadata

2018-05-23 Thread Marc-Antoine Desrochers
Dear Ceph Experts,

 

I recently deleted a very big directory on my CephFS, and a few minutes later
my dashboard started yelling:

Overall status: HEALTH_ERR

MDS_DAMAGE: 1 MDSs report damaged metadata

 

So I immediately logged in to my Ceph admin node and ran a ceph -s:

  cluster:
    id:     472dfc88-84dc-4284-a1cf-0810ea45ae19
    health: HEALTH_ERR
            1 MDSs report damaged metadata

  services:
    mon: 3 daemons, quorum ceph-n1,ceph-n2,ceph-n3
    mgr: ceph-admin(active), standbys: ceph-n1
    mds: cephfs-2/2/2 up  {0=ceph-admin=up:active,1=ceph-n1=up:active}, 1 up:standby
    osd: 17 osds: 17 up, 17 in
    rgw: 1 daemon active

  data:
    pools:   9 pools, 1584 pgs
    objects: 1093 objects, 418 MB
    usage:   2765 MB used, 6797 GB / 6799 GB avail
    pgs:     1584 active+clean

  io:
    client:   35757 B/s rd, 0 B/s wr, 34 op/s rd, 23 op/s wr

 

and after some research I tried ceph tell mds.0 damage ls:

"damage_type": "backtrace",

"id": 2744661796,

"ino": 1099512314364,

"path": "/M3/sogetel.net/t/te/testmda3/Maildir/dovecot.index.log.2"

 

So I tried to do what I saw at
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg35682.html,
but it did not work, so now I don't know how to fix it.
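
For reference, the usual next steps for a "backtrace" damage entry look roughly
like this (a sketch only; the daemon name, path and damage id are taken from
the output above, and scrub_path has to be run on the active MDS that owns that
path):

    # online scrub and repair of the affected subtree
    ceph daemon mds.ceph-admin scrub_path /M3/sogetel.net/t/te/testmda3/Maildir recursive repair

    # once the scrub comes back clean (or the file is gone anyway), clear the
    # damage entry by the id reported by "damage ls"
    ceph tell mds.0 damage rm 2744661796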

 

Can you help me?
