Hi Tobias,

I had a similar problem with Supermicro and this HBA:
https://storage.microsemi.com/en-us/support/sas/sas/aha-1000-8i8e/
The problem was an incompatibility between the aacraid module/driver and
CentOS 7.3. I had to go back to CentOS 7.2 with kernel 3.10.0-327.el7.x86_64,
as to this day the driver does not support a newer kernel.

[root@agpceph01 ~]# modinfo aacraid
filename:       /lib/modules/3.10.0-327.el7.x86_64/extra/aacraid/aacraid.ko
version:        1.2.1.53005
license:        GPL
description:    Dell PERC2, 2/Si, 3/Si, 3/Di, Adaptec Advanced Raid Products, HP NetRAID-4M, IBM ServeRAID & ICP SCSI driver
author:         Red Hat Inc and Adaptec
rhelversion:    7.2

That fixed my latency problems. Did you check the compatibility of your kernel
and drivers?
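The check described above can be sketched as a small shell snippet. The module name aacraid comes from this thread; substitute your own HBA driver module, and note that which fields modinfo reports depends on the driver build:

```shell
#!/bin/sh
# Sketch: compare the running kernel against the kernel the HBA driver
# module was built for. "aacraid" is the module from this thread --
# substitute your own driver module name.
KVER="$(uname -r)"
echo "Running kernel: $KVER"

# modinfo may live outside PATH (e.g. /sbin); report gracefully if the
# module is not present on this system.
MODINFO="$(command -v modinfo || echo /sbin/modinfo)"
if "$MODINFO" aacraid >/dev/null 2>&1; then
    "$MODINFO" aacraid | grep -E '^(filename|version|rhelversion|vermagic):'
else
    echo "aacraid module not found on this system"
fi
```

If the module's rhelversion or vermagic does not match the running kernel, that mismatch is the kind of incompatibility described above.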

Best regards,
Sven

From: ceph-users [mailto:[email protected]] On behalf of
Tobias Kropf - inett GmbH
Sent: Friday, 21 April 2017 13:41
To: [email protected]
Subject: [ceph-users] Ceph Latency

Hi all,

we have a running Ceph cluster across 5 OSD nodes. Performance and latency are
good. Now we have two new Supermicro OSD nodes with HBAs. OSDs 0-26 are in
the old servers and OSDs 27-55 in the new ones. Is this latency normal? OSDs
27-55 are not yet in a bucket and are mapped to no pools.

osd fs_commit_latency(ms) fs_apply_latency(ms)
  0                     0                    0
  1                     0                    0
  2                     0                    0
  3                     0                    0
  4                     0                    0
  5                     0                    0
  6                     0                    0
  7                     0                    0
  8                     0                    0
  9                     0                    0
10                     0                    0
11                     0                    0
12                     0                    0
13                     0                    0
14                     0                    0
15                     0                    0
16                     0                    0
17                     0                    0
18                     0                    0
19                     0                    0
20                     0                    0
21                     0                    1
22                     0                    0
23                     0                    1
24                     0                    0
25                     0                    0
26                     0                    0
27                    32                   39
28                     0                    8
29                    42                   49
30                     0                   12
31                    52                   60
32                     0                   15
33                     0                    0
34                    18                  108
35                    10                   13
36                    19                  101
37                    14                   99
38                    17                  126
39                    14                   16
40                    19                   64
41                    12                   24
42                    28                  121
43                    16                   25
44                    18                  221
45                    11                   21
46                    35                  134
47                     7                   12
48                    42                  138
49                    11                   17
50                    40                  131
51                    15                   22
52                    39                  137
53                    14                   23
54                    36                  231
55                    12                   16
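A quick way to spot the outliers in output like the table above is a small awk filter. The 50 ms threshold is an assumption for illustration, and the here-document holds only a subset of the rows from the table:

```shell
# Sketch: flag OSDs whose fs_apply_latency exceeds a threshold (assumed
# 50 ms) in "osd / fs_commit_latency / fs_apply_latency" output like the
# table above. The sample rows are a subset copied from that table.
awk 'NR > 1 && $3 > 50 { print "osd." $1, "apply latency:", $3 " ms" }' <<'EOF'
osd fs_commit_latency(ms) fs_apply_latency(ms)
27 32 39
34 18 108
44 18 221
55 12 16
EOF
```

With the sample rows this prints osd.34 and osd.44, the two rows above the assumed threshold.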

Tobias



_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
