
Why XFS

Motto, quoting the XFS whitepaper from the 1996 USENIX conference: "With today's 9 gigabyte disk drives it only takes 112 disk drives to surpass 1 terabyte of storage capacity" (emphasis mine). http://oss.sgi.com/projects/xfs/papers/xfs_usenix/index.html

XFS: the eXtended File System (from SGI, originally for IRIX, later ported to Linux)

What are 'journaled filesystems'? Filesystems that write metadata updates to an on-disk journal before applying them, so crash recovery means replaying the journal instead of scanning the whole disk.

    Good:
  1. Old and very well-tested: shipping since 1994
  2. Very good performance on large I/Os
  3. Fully 64-bit filesystem; no problem with large files and disks, very well tested (many years of production use) with terabyte-range files and filesystems.
  4. Security: damaged files get zeroed, so you read back zeros rather than stale data from other files...
  5. Native quota support
  6. Native ACLs
  7. Native extended attributes (EAs) - a minimal example follows these lists
    Bad:
  1. Relatively new to Linux. Still, as a design it is the oldest journaling filesystem available on Linux.
  2. Slow on small files (with 1-2 KB files even ext2 is way faster).
  3. Convenience: damaged files get zeroed... supposedly this is an issue with all journaling filesystems.
  4. Not very popular, so you run into issues with exotic software: quotas inside virtual servers, etc.
  5. Not a simple codebase: stable and well tested, but very big and relatively intrusive (much less of an issue on 2.6, where lots of the needed support, like variable-size I/O requests and delayed allocation, was moved to lower layers).
  6. Takes a few percent more disk space than other filesystems; this bit me when I tried to convert a partition that was 99% full...
  7. On a related note, XFS gets very slow when the filesystem is 99.x% full.
  8. No data= journaling modes (no equivalent of ext3's data=ordered or data=journal).
  9. You can't use it on RAID1, either directly on hardware RAID or via md.
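As a quick illustration of the native EA support mentioned above, here is a minimal C sketch (assuming a Linux box with the xattr syscalls; the file path and attribute name are made-up placeholders) that sets and reads back a user attribute:

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/xattr.h>

    int main(void)
    {
        const char *path = "/mnt/xfs/somefile";   /* hypothetical file on an XFS mount */
        const char *value = "nightly";

        /* create or replace the attribute "user.backup" */
        if (setxattr(path, "user.backup", value, strlen(value), 0) != 0) {
            perror("setxattr");
            return 1;
        }

        char buf[256];
        ssize_t len = getxattr(path, "user.backup", buf, sizeof(buf) - 1);
        if (len < 0) {
            perror("getxattr");
            return 1;
        }
        buf[len] = '\0';
        printf("user.backup = %s\n", buf);
        return 0;
    }

The same calls work on any xattr-capable filesystem, but XFS has had this natively since its IRIX days.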

    Features:
  • Large file & filesystem support: 2^63-1 byte files supported.
  • Extensive use of B+trees
  • Scalable & multi-threaded - scales very well with added spindles and CPUs (working installations with up to 512 CPUs...), mainly thanks to allocation groups.
  • Variable block size: 512 bytes to 64 KB
  • Subvolume support (including a realtime subvolume) - an XFS filesystem is split across up to three subvolumes:
    • Data (metadata + file data)
    • Log
    • Realtime
      Useful utilities:
    • xfsdump, xfsrestore (this is my personal top feature ;)
    • xfs_growfs (fast and reliable; similar utilities exist for ext2 & friends, but they seem kludgy and are not supported by the ext2/3 developers)
    • mkfs.xfs - very fast (this is surprisingly important once you get into the terabyte range)

    History: in 1993 Berkeley FFS was the state of the art; IRIX used EFS, an extents-based FFS-style filesystem. The problems XFS set out to solve were largely the same ones Linux had to fix around 1999:

    • File size limit (2 GB)
    • Filesystem size limit (8 GB)
    • Statically allocated metadata
    • Long recovery times
    • Slow operation on big directories
    • Lack of extended attributes
    • Problems with media streaming
    • General problems with I/O speed: high-end hardware became so fast at I/O that FFS couldn't keep up. Things like delayed (late) allocation helped with that.

    Filesystems generations:

    1. Minix fs / System V fs
    2. FFS / ext2
    3. JFS / XFS / ReiserFS
      • journaling(or equivalent)
      • flexible metadata structures
      • dynamic inode allocation
      • extents
      Things XFS brought to Linux:
    • XFS - the filesystem ;)
    • Direct I/O (btw, besides the normal uses for Direct I/O, it can be used to bypass the 16 TB file size limit on 32-bit Linux systems, which comes from page-cache indexing; with Direct I/O it's possible to use 8388608 TB (2^23 TB, i.e. 2^63 byte) files ;) - see the sketch below
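For the Direct I/O point above, here is a minimal C sketch of what O_DIRECT looks like on Linux. The path is a placeholder, and the 4096-byte alignment is an assumption; the real alignment requirement depends on the filesystem and device.

    #define _GNU_SOURCE            /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/mnt/xfs/bigfile";    /* hypothetical file */
        int fd = open(path, O_RDONLY | O_DIRECT);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* O_DIRECT requires a suitably aligned user buffer */
        void *buf;
        if (posix_memalign(&buf, 4096, 4096) != 0) {
            fprintf(stderr, "posix_memalign failed\n");
            close(fd);
            return 1;
        }

        ssize_t n = read(fd, buf, 4096);          /* bypasses the page cache */
        if (n < 0)
            perror("read");
        else
            printf("read %zd bytes directly\n", n);

        free(buf);
        close(fd);
        return 0;
    }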

    How to install Debian on XFS? http://people.debian.org/~blade/XFS-Install/ - woody CDs...


    Generic reading on journaling vs. soft updates: http://www.usenix.org/publications/library/proceedings/usenix2000/general/full_papers/seltzer/seltzer_html/index.html and http://www.mckusick.com/softdep/index.html
