Dear All,
Our system was recently upgraded to lustre-2.10.6. We are migrating
data from some almost-full OSTs to a newly installed file
server, but we often see the file system freeze for about 30 seconds
and then return to normal (this can happen several times within 5 minutes).
Our
This presentation from LUG 2017 might be useful for you:
http://cdn.opensfs.org/wp-content/uploads/2017/06/Wed06-CroweTom-lug17-ost_data_migration_using_ZFS.pdf
It shows how ZFS send/receive can be used to migrate data between OSTs. I
used it as a reference when I worked with another admin to
Hi Kurt,
Haven't got much experience with the complete send/receive to a remote
ZFS fs. However, I've created my own scripts for just sending to files.
Also, I've moved an MDT from ashift=12 to ashift=9 with send/recv. It
worked without any problems.
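The "sending to files" and ashift-change workflows mentioned above might be sketched roughly as below. The pool, dataset, and path names (ostpool/ost0, newpool, /backup, sdx/sdy) are placeholders, and the script only echoes the commands; set RUN to empty to execute them on a real system:

```shell
RUN=echo   # dry run: print the commands; set RUN= to actually execute

# 1. Take a snapshot of the source dataset so the stream is consistent
$RUN zfs snapshot ostpool/ost0@migrate

# 2. Send the full stream to a plain file instead of a remote pool
$RUN sh -c 'zfs send ostpool/ost0@migrate > /backup/ost0.zfs'

# 3. Create the target pool with the new ashift and receive into it
$RUN zpool create -o ashift=9 newpool mirror sdx sdy
$RUN sh -c 'zfs receive newpool/ost0 < /backup/ost0.zfs'
```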
ZFS versions above 0.7.9 have issues together with
I have been playing a little bit with DNE today, and I had a question about
some odd behavior I saw regarding inode counts. My Lustre 2.10.6 file system
has 2 MDTs. I created a directory (which by default resides on MDT0) and then
created 10 files in that directory:
[root@sip-mgmt2 test]#
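For anyone wanting to reproduce this, a minimal dry-run sketch of placing a directory on a specific MDT and checking per-MDT inode usage; it assumes a client mount at /mnt/lustre, and the directory name is made up:

```shell
RUN=echo   # dry run: print the commands instead of executing them

$RUN lfs mkdir -i 1 /mnt/lustre/on-mdt1    # -i selects the MDT index (here MDT0001)
$RUN lfs getdirstripe /mnt/lustre/on-mdt1  # show which MDT holds the directory
$RUN lfs df -i /mnt/lustre                 # per-target inode totals and usage
```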
Hmm. I think because users can append to any file at any time, and can also
append to a file and then write to it normally, we might override the user's
preferred layout for a file where appending is just a small part of the plan. (And of
course, since we can’t control when users append, it doesn’t
Dear All,
This is a follow-up on the issue of migrating data out of an OST.
Two weeks ago we upgraded our Lustre file system to version
2.10.6 (the OSTs are based on ldiskfs). Since many OSTs are
filled to over 99% of their capacity, we plan to migrate
parts of their data to free
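One way to drain files off a nearly full OST is lfs find plus lfs migrate. A dry-run sketch, with the mount point and OST index as placeholders; on a live system you would typically deactivate the full OST on the MDS first so new objects stop landing on it:

```shell
RUN=echo   # dry run: print the commands instead of executing them

# List regular files that have objects on OST index 3 ...
$RUN lfs find /mnt/lustre --ost 3 -type f
# ... then restripe them onto other OSTs (in practice, feed the find
# output through a while-read loop and migrate each file in turn)
$RUN lfs migrate -c 1 /mnt/lustre/path/to/file
```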
Another thought I just had while re-reading LU-9341 is whether it would be
better to have the MDS always create files opened with O_APPEND with
stripe_count=1? There is no write parallelism for O_APPEND files, so having
multiple stripes doesn't help the writer. Because the writer
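Until the MDS does something like this automatically, an admin can approximate it today by setting a one-stripe default on directories that hold append-mode files. A dry-run sketch; the directory name is illustrative:

```shell
RUN=echo   # dry run: print the commands instead of executing them

$RUN lfs setstripe -c 1 /mnt/lustre/logs   # new files here default to one stripe
$RUN lfs getstripe -d /mnt/lustre/logs     # show the directory's default layout
```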
Thomas,
there _are_ potential use cases for having incomplete layouts now and in
the future:
- limiting the size of files (e.g. logs), so that they don't exceed a set limit
- partial HSM restore or FLR mirrors for very large files on fast/local storage
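As a concrete illustration of the size-limiting case: a PFL layout whose last component ends before EOF is incomplete, the idea being that writes past the end of the last component fail, which effectively caps the file. A dry-run sketch; the path and sizes are placeholders:

```shell
RUN=echo   # dry run: print the command instead of executing it

# Components cover only [0, 1G); with no component extending to EOF the
# layout is incomplete, so the file should not grow past 1 GiB.
$RUN lfs setstripe -E 1G -c 1 /mnt/lustre/capped.log
```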
The append problem is something we are