Hi All,

I am seeing an issue with Ceph performance.
I am starting from an empty cluster of 5 nodes with ~600TB of storage.

Monitoring disk usage in nmon, I see rolling 100% utilisation of a single disk.
ceph -w doesn't report any spikes in throughput, and the application writing
the data is not generating spikes in load.

│sdg2       0%    0.0  537.5|                    >                            |
│sdh        2%    4.0 4439.8|RW                                               >
│sdh1       2%    4.0 3972.3|RW                                               >
│sdh2       0%    0.0  467.6|                  >                              |
│sdj        3%    2.0 3524.7|RW                                               >
│sdj1       3%    2.0 3488.7|RW                                               >
│sdj2       0%    0.0   36.0|                          >                      |
│sdk       99% 1144.9 3564.6|RRRRRRRRRRRRRWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW>
│sdk1      99% 1144.9 3254.9|RRRRRRRRRRRRRWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW>
│sdk2       0%    0.0  309.7|W            >                                   |
│sdl        1%    4.0  955.1|R                          >                     |
│sdl1       1%    4.0  791.3|R                          >                     |
│sdl2       0%    0.0  163.8|                    >                            |
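
(In case it is useful, this is roughly how I have been tying the busy device
back to a particular OSD; it assumes the standard /var/lib/ceph/osd/ceph-<id>
mount layout, and osd.<id> below is just a placeholder for whatever that turns
out to be.)

    # which OSD data directory is mounted on the busy partition?
    mount | grep sdk1

    # recent slowest operations on that OSD, via its admin socket
    ceph daemon osd.<id> dump_historic_ops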


Is this anything to do with the way objects are stored on the file system?
I remember reading that, as the number of objects grows, the files on disk are
re-organised?

For obvious reasons this issue causes a large degradation in performance; is
there a way of mitigating it?
Will it go away as the cluster reaches a higher level of disk utilisation?
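
If it is the filestore directory split/merge behaviour I half-remember, would
tuning something like this in ceph.conf be a reasonable mitigation? (The values
below are only a guess on my part, not something I have tested.)

    [osd]
    # as I understand it, a PG subdirectory is split once it holds more than
    # (filestore split multiple) x (filestore merge threshold) x 16 objects,
    # so raising these delays splitting at the cost of larger directories
    filestore merge threshold = 40
    filestore split multiple = 8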


Kind Regards,
Bryn Mathias

