[My previous message was somewhat garbled when reflected back at me.  It looks 
better in the archives here: 
https://marc.info/?l=openbsd-misc&m=161902769301731&w=2.  I’m resending as 
plain-text to see if the problem is on my end.]


I’m running OpenBSD on top of bhyve using virtual disks allocated out of ZFS 
pools.  While not the same setup, some concepts carry over...

I have two types of pools (creation commands are sketched just after this list):

  1) an “expensive" pool for fast random IO:
        - this pool is made up stripes of SSD-based vdevs.
        - ZFS is configured to use a 16K recordsize for this pool.
        - good for small files (guest OS, DBs, web/mail/dns files, etc.)
        - When ZFS is handed the whole SSD, it starts its partition
           on sector 256 (rather than the GPT default of sector 34)
           to keep the pool aligned with the SSD’s NAND pages.

  2) a less-expensive pool for large sequential IO:
        - this pool is a single RAIDZ2-based vdev using spinning rust.
        - ZFS is configured to use a 1M recordsize for this pool.
        - good for large files (movies, high-res images, backups, etc.)
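
For concreteness, creating the two pools might look roughly like this
(pool names, device names, and the mirror layout of pool #1 are invented
for illustration; all the above says is “stripes of SSD-based vdevs”):

    # Pool #1: SSD pool for small, random IO; 16K records.
    zpool create fast mirror ada0 ada1 mirror ada2 ada3
    zfs set recordsize=16K fast
    zfs create fast/vms        # holds guest disk images, inherits 16K

    # Pool #2: single RAIDZ2 vdev of spinning disks; 1M records.
    zpool create bulk raidz2 da0 da1 da2 da3 da4 da5
    zfs set recordsize=1M bulk
    zfs create bulk/media      # inherits the 1M recordsize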

Virtual disks are exposed to the OpenBSD guests from both pools.  The guest’s 
root disk is always allocated from pool #1.  Typically, a second 
application-specific disk is also allocated from pool #1 (e.g., /var/www/sites 
on a web server, /home on a mail server, etc.).  Only in special circumstances 
(e.g., a media server) is a disk allocated from pool #2. 
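
A minimal sketch of wiring one such guest together, assuming file-backed
disk images on the datasets above and FreeBSD’s stock edk2 bootrom path
(the guest name, image sizes, and PCI slot numbers are all hypothetical):

    # Per-guest dataset on the fast pool; sparse image files.
    zfs create fast/vms/www1
    truncate -s 20G /fast/vms/www1/root.img
    truncate -s 40G /fast/vms/www1/sites.img

    # Boot the guest with both images attached as virtio block devices.
    bhyve -c 2 -m 2G -H -w \
        -s 0,hostbridge \
        -s 3,virtio-blk,/fast/vms/www1/root.img \
        -s 4,virtio-blk,/fast/vms/www1/sites.img \
        -s 31,lpc -l com1,stdio \
        -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
        www1

A media-server guest would simply get an extra image carved out of
bulk/media the same way.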

This arrangement avoids having to read and rewrite a full 1M record for 
every small-file access, and also the possibility that a single block 
access by a guest spans more than one underlying record.
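
To put a number on the first point: a 4K guest write landing in an
uncached 1M record can force ZFS to read the full record, modify 4K of
it, and write the whole record back, i.e. roughly 256x amplification;
with a 16K recordsize the same write touches only a 16K record.  The
recordsize in effect is easy to confirm (dataset names here are the
hypothetical ones from the sketches above):

    zfs get recordsize fast/vms bulk/media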

Can VMware virtual disks be configured similarly?

K.

