Hi Andreas. The latest version that I have seen in the Lustre repository is:
1.42.13.wc4-7

Regards
=============================================
Fernando Pérez
Institut de Ciències del Mar (CMIMA-CSIC)
Departament Oceanografía Física i Tecnològica
Passeig Marítim de la Barceloneta, 37-49
08003 Barcelona
Phone: (+34) 93 230 96 35
=============================================

> On 7 May 2016, at 0:02, Dilger, Andreas <[email protected]> wrote:
>
> On 2016/05/06, 15:48, "lustre-discuss on behalf of Fernando Pérez"
> <[email protected] on behalf of [email protected]> wrote:
>
>> Thank you Mark.
>>
>> I have finally killed the e2fsck. After restarting our Lustre
>> filesystem, everything seems to work OK.
>>
>> We are using two 300 GB 10K SAS drives in RAID 1 for the combined
>> MDT/MGS.
>>
>> I tried to run e2fsck -fy because the -fn run finished in 2 hours...
>> I think there is a problem in the latest e2fsprogs, because e2fsck
>> reported that it was repairing more inodes than our filesystem has.
>
> Which specific version of e2fsprogs are you using?
>
> Cheers, Andreas
>
>> Regards.
>>
>>> On 6 May 2016, at 17:57, Mark Hahn <[email protected]> wrote:
>>>
>>>> More information about our Lustre system: the combined MDS/MDT has
>>>> 189 GB, with 8.9 GB used. It was formatted with the default options.
>>>
>>> fsck time depends more on the number of files (inodes) than on the
>>> size. But either you have quite slow storage, or something is wrong.
>>>
>>> As a comparison point, I can do a full/forced fsck on one of our
>>> MDS/MDTs that has 143G of 3.3T in use (313M inodes) in about 2 hours.
>>> It is an MD RAID 10 on 16x 10K SAS drives, admittedly.
>>>
>>> If your hardware is conventional (locally attached multi-disk RAID),
>>> it might make sense to look at its configuration. For instance, fsck
>>> is largely seek-limited, so doing too much readahead, or using large
>>> RAID block sizes (for RAID 5/6), can be disadvantageous. Having
>>> plenty of RAM helps in some phases.
>>>
>>> regards, mark hahn.
>
> Cheers, Andreas
> --
> Andreas Dilger
> Lustre Principal Architect
> Intel High Performance Data Division
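
As a quick reference for the version question above, a minimal sketch of
how one can confirm which e2fsck is actually installed and repeat the
check read-only before trusting a -fy repair (the device path /dev/mdX
is a placeholder, not a path from this thread):

    # Print the installed e2fsck version and exit.
    e2fsck -V

    # Forced, read-only check of the unmounted MDT: -f forces the check
    # even if the filesystem looks clean, and -n answers "no" to every
    # prompt, so nothing on disk is modified.
    # /dev/mdX is a placeholder for the actual MDT block device.
    e2fsck -fn /dev/mdX

A clean -fn pass is a cheap way to sanity-check the tool's output (for
example, implausible inode counts like those reported above) before
committing to a destructive -fy run.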
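On Mark's point that fsck is largely seek-limited, a sketch of how one
might inspect and temporarily reduce the block-device readahead before
a check; again /dev/mdX is a placeholder, and 256 sectors (128 KB) is
only an illustrative value, not a recommendation from this thread:

    # Show the current readahead setting, in 512-byte sectors.
    blockdev --getra /dev/mdX

    # Temporarily lower readahead for a seek-bound fsck run
    # (256 sectors = 128 KB; illustrative value only).
    blockdev --setra 256 /dev/mdX

The setting does not persist across reboots, so it can be restored to
the original --getra value once the check finishes.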
