I'm testing an idle Ceph cluster.
The pgmap version keeps increasing. Is this normal?

2014-04-30 17:20:41.934127 mon.0 [INF] pgmap v281: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
2014-04-30 17:20:42.962033 mon.0 [INF] pgmap v282: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
2014-04-30 17:20:35.373060 osd.4 [INF] 0.179 scrub ok
2014-04-30 17:20:37.373338 osd.4 [INF] 0.7a scrub ok
2014-04-30 17:20:38.373606 osd.4 [INF] 0.1ba scrub ok
2014-04-30 17:20:43.990160 mon.0 [INF] pgmap v283: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
2014-04-30 17:20:46.361545 mon.0 [INF] pgmap v284: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
2014-04-30 17:20:48.438894 mon.0 [INF] pgmap v285: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
2014-04-30 17:20:44.297707 osd.2 [INF] 2.26 scrub ok
2014-04-30 17:20:46.297851 osd.2 [INF] 2.27 scrub ok
2014-04-30 17:20:48.298423 osd.2 [INF] 2.29 scrub ok
2014-04-30 17:20:51.931978 mon.0 [INF] pgmap v286: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
2014-04-30 17:20:46.374796 osd.4 [INF] 0.3e scrub ok
2014-04-30 17:20:48.375078 osd.4 [INF] 1.2 scrub ok
2014-04-30 17:20:50.375458 osd.4 [INF] 1.3d scrub ok
2014-04-30 17:20:51.375821 osd.4 [INF] 2.1 scrub ok
2014-04-30 17:20:52.376033 osd.4 [INF] 2.3c scrub ok
2014-04-30 17:20:53.954350 mon.0 [INF] pgmap v287: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
2014-04-30 17:20:56.364735 mon.0 [INF] pgmap v288: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
2014-04-30 17:20:53.299142 osd.2 [INF] 2.2c scrub ok
2014-04-30 17:20:58.299835 osd.2 [INF] 2.3d scrub ok
2014-04-30 17:21:01.932738 mon.0 [INF] pgmap v289: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail



The cluster is doing nothing at this time.
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com