I'm sitting here in the US GPFS UG meeting in NYC and just found out about this
list.

We've been GPFS users for many years, first with integrated DDN support and now
also with a GSS system. We have about 4PB of raw GPFS storage and 1 billion
inodes. We keep our metadata on TMS RamSan for very fast policy execution for
tiering and migration.

We use GPFS to hold the primary source data from our custom supercomputers. We
run many policies periodically to manage the data, including placing certain
files on dedicated fast pools and later migrating them off to wide swaths of
disk for read access from cluster clients (a rough sketch of that rule pair is
below).
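For anyone newer to the policy engine, here is a minimal sketch of the kind of
placement/migration rule pair I mean. The pool names ('fast', 'capacity'), the
threshold numbers, and the weighting are made up for illustration; the rule
syntax itself is the standard mmapplypolicy policy language.

    /* Land newly written files on the fast pool */
    RULE 'ingest' SET POOL 'fast'

    /* Periodically drain the fast pool out to the wide capacity disks,
       oldest-accessed files first */
    RULE 'drain' MIGRATE FROM POOL 'fast'
         THRESHOLD(80,50)
         WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
         TO POOL 'capacity'

We drive the migration pass on a schedule with something along the lines of
"mmapplypolicy <fsname> -P policy.rules -N <helper nodes>", with the node list
chosen to spread the scan work.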

One pain point, which I'm sure many of the rest of you have seen: restripe
operations for just metadata are unnecessarily slow. If we experience a flash
module failure and need to restripe, the restripe also has to scan all of the
data. I have a feature request open to make metadata restripes look only at
metadata (since it lives on RamSan/FlashCache, that should be very fast)
instead of scanning everything, which can and does take months and impacts
performance.
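Concretely, what we end up running after a module failure is something along
these lines (the file system name and node list are placeholders); -r restores
replication, but today that pass walks data and metadata alike:

    mmrestripefs fs0 -r -N nsdnodes

It would be great if that pass could be restricted to the metadata disks when
only a metadata NSD was affected.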

Doug Hughes
D. E. Shaw Research, LLC.
