ZFS on Linux has been added to SL 6.4 Addons.

http://ftp.scientificlinux.org/linux/scientific/6x/addons/x86_64/zfs/

http://ftp.scientificlinux.org/linux/scientific/6x/addons/x86_64/repoview/

Enjoy, and community feedback is welcome!

*Release Explanation*
ZFS on Linux (ZOL) is currently added manually to HELiOS (Healthcare Enterprise 
Linux Operating System).
HELiOS is a spin of Scientific Linux created and maintained by the GE 
Healthcare Compute Systems Team (CST).
HELiOS strives to stay as close to upstream as possible, so including ZOL in SL 
helps us better maintain upstream purity in HELiOS.
Including ZOL in SL also allows the rest of the SL community to benefit from 
our work with ZOL.

*Release Notes*
Core ZFS development started in 2001, and ZFS was officially released by Sun 
in 2004.
Testing and evaluation of ZOL by CST has shown better performance, scalability, 
and stability than BTRFS.
Because core ZFS is so mature, ZOL inherits ZFS features that are many years 
ahead of BTRFS.
Testing by CST with ZOL on a proper hardware setup with proper SSD ZIL/L2ARC 
devices has shown ZOL to yield better performance than both BTRFS and native 
Solaris ZFS.
Performance tests of ZOL by CST have also yielded better results than the 
equivalent tests run on Sun/Oracle or Nexenta ZFS storage appliances.
Additional testing by CST has shown that combining ZOL with GlusterFS yields 
a very powerful, redundant, and almost infinitely scalable storage solution.

ZOL is the work of Lawrence Livermore National Laboratory (LLNL) under Contract 
No. DE-AC52-07NA27344 (Contract 44) between the U.S. Department of Energy (DOE) 
and Lawrence Livermore National Security, LLC (LLNS) for the operation of LLNL.


>> FAQ for Questions, Comments, and concerns with ZOL in SL <<



*QuickLinks*

- Main ZFS on Linux Website: http://zfsonlinux.org/

- ZOL FAQ: http://zfsonlinux.org/faq.html

- SPL Source Repository: https://github.com/zfsonlinux/spl

- ZFS Source Repository: https://github.com/zfsonlinux/zfs

- ZFS Announce: 
https://groups.google.com/a/zfsonlinux.org/forum/?fromgroups#!forum/zfs-announce

- ZFS_Best_Practices_Guide (written for Solaris but most things still apply): 
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

- ZFS_Evil_Tuning_Guide (written for Solaris but most things still apply): 
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide



- For providing feedback, or for general use and installation questions, use 
one of the following mailing lists:

  --> scientific-linux-users

  Or

  --> Existing zfs-discuss 
(https://groups.google.com/a/zfsonlinux.org/forum/?fromgroups#!forum/zfs-discuss)



- For development questions use the following existing ZOL mailing list:

  --> 
https://groups.google.com/a/zfsonlinux.org/forum/?fromgroups#!forum/zfs-devel



- To report SPL bugs:

  --> https://github.com/zfsonlinux/spl/issues



- To report ZFS bugs:

  --> https://github.com/zfsonlinux/zfs/issues



*Some general best practices to note with ZOL*

- SSD ZIL and L2ARC devices help performance in general

- Always try to use low-latency SSD devices!

- Multipath your disks using multiple HBAs whenever possible

- Something to note in ZOL is this change in rc12:

--> https://github.com/zfsonlinux/zfs/commit/920dd524fb2997225d4b1ac180bcbc14b045fda6

--> Translation: how ZOL better handles and avoids the situation seen with 
BSD and native Solaris ZFS, described here:

--> http://www.nex7.com/node/12

- Try to limit pools to no more than 48 disks

- If using ZOL to store KVM virtual machine images, I have found the following 
setup yields the best performance:

--> ZVOLs formatted with ext4 using a larger blocksize yield the best 
performance when combined with KVM and RAW thick- or thin-provisioned 
file-backed disks.

--> Create a zvol as follows (example):
    zfs create -V 100G -o volblocksize=64K das0/foo

--> After that, a simple mkfs:
    mkfs.ext4 -L <zvolname> /dev/das0/foo

--> Then mount it (example):
    mount /dev/das0/foo /some/mount/point -o noatime

--> /some/mount/point is exported via NFS v3. Feel free to enable NFS async 
for additional performance, provided you understand the implications of 
doing so.

--> Additionally, set the qemu/kvm VM disk cache policy to none and the IO 
policy to threaded.



*ZFS Examples*

# Limit the ZFS ARC, else it defaults to total system RAM minus 1GB
# Example: 64GB
# Create and add the following to /etc/modprobe.d/zfs.conf:
options zfs zfs_arc_max=68719476736
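The 68719476736 above is simply 64GB expressed in bytes. A quick sketch of the 
arithmetic, so you can adapt the limit to your own machine (the variable names 
here are illustrative, not part of any ZFS interface):

```shell
# Compute a zfs_arc_max value in bytes from a cap given in GiB.
# bytes = GiB * 1024^3; 64GiB matches the example above.
ARC_MAX_GIB=64
ARC_MAX_BYTES=$((ARC_MAX_GIB * 1024 * 1024 * 1024))
echo "options zfs zfs_arc_max=${ARC_MAX_BYTES}"
```

Running this prints the exact line used in the /etc/modprobe.d/zfs.conf example.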



ZFS Zpool Creation Commands (examples are using dm-multipath disks)

# EX: Create a 24 Disk Raidz2 (Raid6) pool
zpool create das0 raidz2 mpathb mpatha mpathc mpathd mpathe mpathf mpathg 
mpathh mpathi mpathj mpathk mpathl mpathm mpathn mpatho mpathp mpathq mpathr 
mpaths mpatht mpathu mpathv mpathw mpathx
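As a rough sanity check on what that layout costs: a raidz2 vdev reserves two 
disks' worth of space for parity. The disk size below (2TB) is a hypothetical 
assumption purely for illustration, not from the setup above:

```shell
# Rough usable-capacity arithmetic for a 24-disk raidz2 vdev.
# DISK_TB=2 is a hypothetical example disk size.
DISKS=24
PARITY=2          # raidz2 keeps two disks' worth of parity
DISK_TB=2
echo "usable: $(( (DISKS - PARITY) * DISK_TB ))TB (before metadata overhead)"
```

The same arithmetic with PARITY=1 or PARITY=3 covers raidz1 and raidz3 vdevs.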



# EX: Create a 24 Disk Striped Mirrors (Raid10) pool
zpool create das0 mirror mpathb mpatha mirror mpathc mpathd mirror mpathe 
mpathf mirror mpathg mpathh mirror mpathi mpathj mirror mpathk mpathl mirror 
mpathm mpathn mirror mpatho mpathp mirror mpathq mpathr mirror mpaths mpatht 
mirror mpathu mpathv mirror mpathw mpathx



# EX: Create a 24 Disk Striped Mirrors (Raid10) pool with the ashift option
# Set ashift=12; this is required when dealing with Advanced Format drives
# Using it with non-AF drives can give a performance boost with some workloads
# Using it does decrease overall pool capacity
zpool create -o ashift=12 das0 mirror mpathb mpatha mirror mpathc mpathd 
mirror mpathe mpathf mirror mpathg mpathh mirror mpathi mpathj mirror mpathk 
mpathl mirror mpathm mpathn mirror mpatho mpathp mirror mpathq mpathr mirror 
mpaths mpatht mirror mpathu mpathv mirror mpathw mpathx
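For context on the ashift=12 choice: ashift is the base-2 logarithm of the 
pool's sector size, so 12 corresponds to the 4096-byte physical sectors of 
Advanced Format drives. A quick check of the arithmetic:

```shell
# ashift is log2(sector size): ashift=12 -> 2^12 = 4096-byte sectors,
# matching Advanced Format (4K) drives; ashift=9 would mean 512-byte sectors.
ASHIFT=12
echo "sector size: $(( 1 << ASHIFT )) bytes"
```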



Zpool autoexpand option

# Can be specified at pool creation time
# Can be set at any time
# Needed if you want the pool to grow when its drives are replaced with larger ones
# At pool creation time: -o autoexpand=on
# After pool creation: zpool set autoexpand=on das0



Add two SSDs striped together as read cache (L2ARC) to a pool:
zpool add das0 cache /dev/<disk-by-path> /dev/<disk-by-path>



Add two SSDs striped together as a write cache ZFS Intent Log (ZIL) to a pool:
zpool add das0 log /dev/<disk-by-path> /dev/<disk-by-path>



Create a ZFS Filesystem

# EX: create ZFS filesystem named foo in pool das0
zfs create das0/foo



Create a ZFS zvol

# EX: create zvol named foov in pool das0
zfs create -V 100G das0/foov



Create a sparse ZFS zvol with a custom blocksize:
zfs create -s -V 500G -o volblocksize=64K das0/foo



Grow a ZFS zvol

# EX: grow foov on pool das0 to 500G from 100G
# Note: if the zvol is formatted with an FS, use that FS's tool to grow it after the resize
# EX: resize2fs (ext4/ext3), xfs_growfs (xfs)
zfs set volsize=500G das0/foov



Export and Import a zpool

# EX: zpool created from devices listed in /dev/mapper and /dev/disk/by-id
zpool export das0
zpool import -f -d /dev/mapper -d /dev/disk/by-id -a

regards,
Chris Brown
GE Healthcare Technologies
Compute Systems Architect

