The filesystem I'm working with has about 100M files and 80 TB of data. What kind of metadata latency do you observe? I ran mmdiag --iohist, filtered out all of the md (metadata) devices, and averaged over reads and writes. I'm seeing ~0.28 ms on a one-off dump. The Pure array we have is 10G iSCSI-connected and reports an average of 0.25 ms.
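For reference, something along these lines is what I mean by filtering and averaging (a rough sketch, not the exact command I ran; the column positions are an assumption and vary by mmdiag version, and "md" matches how our metadata NSDs happen to be named):

```shell
# Average per-I/O service time (ms) for metadata NSDs out of mmdiag --iohist.
# Assumed layout (adjust for your mmdiag version): RW flag in $2,
# disk:sector in $4, time-in-ms in $6. "md" matches our NSD naming.
avg_md_latency() {
  awk '$4 ~ /^md/ && ($2 == "R" || $2 == "W") { sum += $6; n++ }
       END { if (n) printf "%.2f\n", sum / n }'
}

# Real use: mmdiag --iohist | avg_md_latency
# Demo with two fabricated iohist-style lines:
printf '10:00:00 R data md01:123 8 0.30\n10:00:01 W data md02:456 8 0.26\n' \
  | avg_md_latency
```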
On Wed, Mar 22, 2017 at 6:47 AM, Sobey, Richard A <[email protected]> wrote:

> We’re also snapshotting 4 times a day. Filesystem isn’t tremendously busy
> at all but we’re creating snaps for each fileset.
>
> [root@cesnode tmp]# mmlssnapshot gpfs | wc -l
> 6916
>
> *From:* [email protected]
> [mailto:[email protected]] *On Behalf Of* J. Eric Wonderley
> *Sent:* 20 March 2017 14:03
> *To:* gpfsug main discussion list <[email protected]>
> *Subject:* [gpfsug-discuss] snapshots & tiering in a busy filesystem
>
> I found this link and it didn't give me much hope for doing snapshots &
> backup in a home (busy) filesystem:
>
> http://www.spectrumscale.org/pipermail/gpfsug-discuss/2013-February/000200.html
>
> I realize this is dated and I wondered if QoS etc. have made it a tolerable
> thing to do now. GPFS, I think, was barely above v3.5 in mid 2013.
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
