Re: ZFS mirror install /mnt is empty
On 13-05-13 07:58, Trond Endrestøl wrote:

On Sun, 12 May 2013 23:11+0200, Roland van Laar wrote:

Hello, I followed these[1] steps up to the Finishing touches. I'm using 9.1-RELEASE. After the install I go into the shell and /mnt is empty. The mount command shows that the ZFS partitions are mounted. When I reboot, the system can't find the bootloader. What can I do to fix this? Thanks, Roland van Laar

[1] https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE

Looking through the wiki notes I would do a couple of things in a different way. Since you're running 9.1-RELEASE you should take into account the need for the /boot/zfs/zpool.cache file until 9.2-RELEASE exists or you switch to the latest 9-STABLE. Create your zpool using a command like this one:

zpool create -o cachefile=/tmp/zpool.cache -m /tmp/zroot zroot /dev/gpt/disk0

Copy the /tmp/zpool.cache file to /tmp/zroot/boot/zfs/zpool.cache, or in your case to /mnt/boot/zfs/zpool.cache, after extracting the base and kernel sets. In the wiki section Finishing touches, perform step 4 before step 3. The final command missing in step 3 should be zfs unmount -a once more. Avoid step 5 at all costs! Maybe this recipe is easier to follow; it certainly works for 9.0-RELEASE and 9.1-RELEASE. I only hope you're happy typing long commands, and yes, command line editing is available in the shell:

https://ximalas.info/2011/10/17/zfs-root-fs-on-freebsd-9-0/

Thank you for that link. This worked (better). I'm getting into the 'mountroot' shell during the boot. Oh well, I'm getting better at this. The ZFS guides on the wiki leave you with an empty root ZFS filesystem after the installation. Once I know a bit more about ZFS and about why the FreeBSD wiki's ZFS installation instructions are wrong, I hope to edit them. Thank you all for your answers, Regards, Roland van Laar

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org
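The cachefile procedure described above can be condensed into a short sequence. This is only a sketch following the quoted instructions, assuming the installer's /mnt mount point and the gpt/disk0 label from the thread; adjust both to your own layout:

```shell
# Create the pool with an explicit cachefile and a temporary mountpoint.
zpool create -o cachefile=/tmp/zpool.cache -m /mnt zroot /dev/gpt/disk0

# ... extract the base and kernel distribution sets into /mnt here ...

# Copy the cache file into the new system so the loader can find the pool.
cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache
```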
Re: ZFS mirror install /mnt is empty
On May 14, 2013, at 12:10 AM, Shane Ambler free...@shaneware.biz wrote:

When it comes to disk compression I think people overlook the fact that it can impact on more than one level.

Compression has effects at multiple levels: 1) CPU resources to compress (and decompress) the data, 2) disk space used, 3) I/O to/from the disks.

The size of disks these days means that compression doesn't make a big difference to storage capacity for most people and 4k blocks mean little change in final disk space used.

The 4K block issue is *huge* if the majority of your data is less than 4K files. It is also large when you consider that, on a fragment-based filesystem, a 5K file will not occupy 8K on disk. I am not a UFS on FreeBSD expert, but UFS on Solaris uses a default block size of 4K with a fragment size of 1K, so files are stored on disk with 1K resolution (so to speak). By going to a 4K minimum block size you are forcing all data up to the next 4K boundary. Now, if the majority of your data is in large files (1MB or more), then the 4K minimum block size probably gets lost in the noise.

The other factor is the actual compressibility of the data. Most media files (JPEG, MPEG, GIF, PNG, MP3, AAC, etc.) are already compressed and trying to compress them again is not likely to garner any real reduction in size. In my experience with the default compression algorithm (lzjb), even uncompressed audio files (.AIFF or .WAV) do not compress enough to make the CPU overhead worthwhile.

One thing people seem to miss is the fact that compressed files are going to reduce the amount of data sent through the bottleneck that is the wire between motherboard and drive. While a 3k file compressed to 1k still uses a 4k block on disk it does (should) reduce the true data transferred to disk. Given a 9.1 source tree using 865M, if it compresses to 400M then it is going to reduce the time to read the entire tree during compilation. This would impact a 32 thread build more than a 4 thread build.
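The rounding effect described above is easy to quantify. A minimal POSIX-sh sketch contrasting allocation with a 4 KiB minimum block size against UFS-style 1 KiB fragments:

```shell
# Allocated on-disk size for a file of $1 bytes, rounded up to the
# next 4 KiB block or 1 KiB fragment respectively.
alloc4k() { echo $(( ($1 + 4095) / 4096 * 4096 )); }
alloc1k() { echo $(( ($1 + 1023) / 1024 * 1024 )); }

alloc4k 5120   # a 5 KiB file is rounded up to 8192 bytes
alloc1k 5120   # with 1 KiB fragments it stays at 5120 bytes
alloc4k 1024   # a 1 KiB file still consumes a full 4096-byte block
```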
If the data does not compress well, then you get hit with the CPU overhead of compression to no bandwidth or space benefit. How compressible is the source tree? [Not a loaded question, I haven't tried to compress it]

While it is said that compression adds little overhead, time-wise,

Compression most certainly DOES add overhead in terms of time, based on the speed of your CPU and how busy your system is. My home server is an HP ProLiant Micro with a dual core AMD N36 running at 1.3 GHz. Turning on compression hurts performance *if* I am getting less than a 1.2:1 compression ratio (5 drive RAIDz2 of 1TB enterprise disks). Above that, the I/O bandwidth reduction due to compression makes up for the lost CPU cycles. I have managed servers where each case prevailed: CPU limited, where compression hurt performance, and I/O limited, where compression helped performance.

it is going to take time to compress the data which is going to increase latency. Going from a 6ms platter disk latency to a 0.2ms SSD latency gives a noticeable improvement to responsiveness. Adding compression is going to bring that back up - possibly higher than 6ms.

Interesting point. I am not sure of the data flow through the code to know whether compression has a defined latency component, or is just throughput limited by the CPU cycles needed to do the compression.

Together these two factors may level out the total time to read a file. One question there is whether the zfs cache uses compressed file data therefore keeping the latency while eliminating the bandwidth.

Data cached in the ZFS ARC or L2ARC is uncompressed. Data sent via zfs send / zfs receive is uncompressed; there had been talk of an option to send / receive compressed data, but I do not think it has gone anywhere.

Personally I have compression turned off (desktop). My thought is that the latency added for compression would negate the bandwidth savings. For a file server I would consider turning it on as network overhead is going to hide the latency.
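The "how compressible is the source tree?" question above can be answered without enabling compression at all. A hedged sketch: stream the tree through gzip -1 as a rough, fast stand-in for ZFS's lzjb (the ratios will differ, but it separates compressible from incompressible data well enough):

```shell
# Print an approximate compression ratio for a directory tree,
# scaled by 100 to keep integer arithmetic (e.g. 216 means 2.16:1).
est_ratio() {
    raw=$(tar cf - "$1" 2>/dev/null | wc -c)
    packed=$(tar cf - "$1" 2>/dev/null | gzip -1 | wc -c)
    echo $(( raw * 100 / packed ))
}

# Example (path assumed, not tested here): est_ratio /usr/src
```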
Once again, it all depends on the compressibility of the data, the available CPU resources, the speed of those CPU resources, and the I/O bandwidth to/from the drives. Note also that RAIDz (RAIDz2, RAIDz3) has its own computational overhead, so compression may be a bigger advantage there than in the case of a mirror, as the RAID code will have less data to process once it has been compressed.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company
Re: ZFS mirror install /mnt is empty
On May 13, 2013, at 1:58 AM, Trond Endrestøl trond.endres...@fagskolen.gjovik.no wrote:

Due to advances in hard drive technology, for the worse I'm afraid, i.e. 4K disk blocks, I wouldn't bother enabling compression on any ZFS file systems. I might change my blog posts to reflect this stop gap. If you do happen to have 4K drives, you might want to check out this blog post: https://ximalas.info/2012/01/11/new-server-and-first-attempt-at-running-freebsdamd64-with-zfs-for-all-storage/

I did look; it doesn't explain why not to enable compression on 4K-sector drives. From discussion on the zfs-discuss lists (both the old one from OpenSolaris and the new one at Illumos), the only issue with 4K-sector drives is mixing 0.5K-sector and 4K-sector drives. You can tune the zpool offset to handle 4K-sector drives just fine, but it is a pool-wide tuning. http://zfsday.com/wp-content/uploads/2012/08/Why-4k_.pdf has some 4K background, and the only mention I see of compression and 4K is that you may get less.

But… you really need to test your data to see if turning compression on is beneficial with any dataset. There is noticeable computational overhead to enabling compression. If you are CPU bound, then you will get better performance with compression off. If you are limited by the I/O bandwidth to your drives, then *if* your data is highly compressible, you will get better performance with compression on. I have managed large pools of both data that compresses well and data that does not.

http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks discusses the issue and presents solutions using Illumos. I could find no such examples for FreeBSD, but I'm sure some of the same techniques would work (manually setting the ashift to 12 for 4K disks).
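On FreeBSD, the usual way to force ashift=12 at pool creation time is the gnop(8) trick. A minimal sketch, assuming a single provider at /dev/gpt/disk0 and the pool name tank (both placeholders):

```shell
# Create a transient shim that presents the provider with 4K sectors.
gnop create -S 4096 /dev/gpt/disk0

# Build the pool on the shim so ZFS derives ashift=12 from its sector size.
zpool create tank /dev/gpt/disk0.nop

# The ashift is recorded permanently in the pool metadata,
# so the shim can be removed again.
zpool export tank
gnop destroy /dev/gpt/disk0.nop
zpool import tank

# Verify; this should report ashift: 12.
zdb -C tank | grep ashift
```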
Re: ZFS mirror install /mnt is empty
On Mon, 13 May 2013 08:40-0400, Paul Kraus wrote:

On May 13, 2013, at 1:58 AM, Trond Endrestøl trond.endres...@fagskolen.gjovik.no wrote:

Due to advances in hard drive technology, for the worse I'm afraid, i.e. 4K disk blocks, I wouldn't bother enabling compression on any ZFS file systems. I might change my blog posts to reflect this stop gap. If you do happen to have 4K drives, you might want to check out this blog post: https://ximalas.info/2012/01/11/new-server-and-first-attempt-at-running-freebsdamd64-with-zfs-for-all-storage/

I did look; it doesn't explain why not to enable compression on 4K-sector drives.

I guess it's due to my (mis)understanding that files shorter than 4KB stored on 4K drives never will be subject to compression. And as you state below, the degree of compression depends largely on the data at hand.

From discussion on the zfs-discuss lists (both the old one from OpenSolaris and the new one at Illumos), the only issue with 4K-sector drives is mixing 0.5K-sector and 4K-sector drives. You can tune the zpool offset to handle 4K-sector drives just fine, but it is a pool-wide tuning. http://zfsday.com/wp-content/uploads/2012/08/Why-4k_.pdf has some 4K background, and the only mention I see of compression and 4K is that you may get less. But… you really need to test your data to see if turning compression on is beneficial with any dataset. There is noticeable computational overhead to enabling compression. If you are CPU bound, then you will get better performance with compression off. If you are limited by the I/O bandwidth to your drives, then *if* your data is highly compressible, you will get better performance with compression on. I have managed large pools of both data that compresses well and data that does not. http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks discusses the issue and presents solutions using Illumos.
I could find no such examples for FreeBSD, but I'm sure some of the same techniques would work (manually setting the ashift to 12 for 4K disks).

--
Vennlig hilsen, / Best regards,
Trond Endrestøl,
IT-ansvarlig, / System administrator,
Fagskolen Innlandet, / Gjøvik Technical College, Norway,
tlf. mob. 952 62 567, / Cellular: +47 952 62 567,
sentralbord 61 14 54 00. / Switchboard: +47 61 14 54 00.
Re: ZFS mirror install /mnt is empty
On May 13, 2013, at 9:25 AM, Trond Endrestøl trond.endres...@fagskolen.gjovik.no wrote:

I guess it's due to my (mis)understanding that files shorter than 4KB stored on 4K drives never will be subject to compression. And as you state below, the degree of compression depends largely on the data at hand.

Not a misunderstanding at all. With a 4K minimum block size (which is what a 4K sector size implies), a file less than 4KB will not compress at all. While ZFS does have a variable block size (512B to 128KB), with a 4K minimum block size (just like with any fixed-block FS with a 4KB block size), small files take up more space than they should (a 1KB file takes up an entire 4KB block). This ends up being an artifact of the block size and not of ZFS; any FS on a 4K-sector drive will have similar behavior.

I leave compression off on most of my datasets, only turning it on for the ones where I see a real benefit. /var compresses very well (I turn off compression in /etc/newsyslog.conf and let ZFS compress even the current logs :-). I find that some VMs compress very well; media files do NOT compress very well (they tend to already be compressed); generic data compresses well, as do scanned documents (uncompressed PDFs). Your individual results will vary :-)

Also remember, if you start with compression on and after a while you are not seeing good compression ratios, go ahead and turn it off. The already-written data will remain compressed but new writes will not be.
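The per-dataset policy described above takes only a few commands; the pool and dataset names below are examples, not from the thread:

```shell
# Enable lzjb on a dataset whose data compresses well (e.g. logs).
zfs set compression=lzjb tank/var

# After some data has been written, check whether it is paying off.
zfs get compressratio tank/var

# Leave it off (or turn it off again) for already-compressed media;
# existing data stays compressed, only new writes are affected.
zfs set compression=off tank/media
```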
Re: ZFS mirror install /mnt is empty
On Mon, 13 May 2013 08:40-0400, Paul Kraus wrote:

On May 13, 2013, at 1:58 AM, Trond Endrestøl wrote:

Due to advances in hard drive technology, for the worse I'm afraid, i.e. 4K disk blocks, I wouldn't bother enabling compression on any ZFS file systems. I might change my blog posts to reflect this stop gap.

I guess it's due to my (mis)understanding that files shorter than 4KB stored on 4K drives never will be subject to compression. And as you state below, the degree of compression depends largely on the data at hand.

I don't want to start a big discussion but want to express an opinion that others may think about. When it comes to disk compression I think people overlook the fact that it can impact on more than one level. The size of disks these days means that compression doesn't make a big difference to storage capacity for most people, and 4K blocks mean little change in final disk space used.

One thing people seem to miss is the fact that compressed files are going to reduce the amount of data sent through the bottleneck that is the wire between motherboard and drive. While a 3K file compressed to 1K still uses a 4K block on disk, it does (should) reduce the true data transferred to disk. Given a 9.1 source tree using 865M, if it compresses to 400M then it is going to reduce the time to read the entire tree during compilation. This would impact a 32 thread build more than a 4 thread build.

While it is said that compression adds little overhead, time-wise, it is going to take time to compress the data, which is going to increase latency. Going from a 6ms platter disk latency to a 0.2ms SSD latency gives a noticeable improvement to responsiveness. Adding compression is going to bring that back up - possibly higher than 6ms. Together these two factors may level out the total time to read a file. One question there is whether the ZFS cache uses compressed file data, thereby keeping the latency while eliminating the bandwidth.
Personally I have compression turned off (desktop). My thought is that the latency added for compression would negate the bandwidth savings. For a file server I would consider turning it on, as network overhead is going to hide the latency.
Re: ZFS mirror install /mnt is empty
On Sun, 12 May 2013 23:11+0200, Roland van Laar wrote:

Hello, I followed these[1] steps up to the Finishing touches. I'm using 9.1-RELEASE. After the install I go into the shell and /mnt is empty. The mount command shows that the ZFS partitions are mounted. When I reboot, the system can't find the bootloader. What can I do to fix this? Thanks, Roland van Laar

[1] https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE

Looking through the wiki notes I would do a couple of things in a different way. Since you're running 9.1-RELEASE you should take into account the need for the /boot/zfs/zpool.cache file until 9.2-RELEASE exists or you switch to the latest 9-STABLE. Create your zpool using a command like this one:

zpool create -o cachefile=/tmp/zpool.cache -m /tmp/zroot zroot /dev/gpt/disk0

Copy the /tmp/zpool.cache file to /tmp/zroot/boot/zfs/zpool.cache, or in your case to /mnt/boot/zfs/zpool.cache, after extracting the base and kernel sets. In the wiki section Finishing touches, perform step 4 before step 3. The final command missing in step 3 should be zfs unmount -a once more. Avoid step 5 at all costs! Maybe this recipe is easier to follow; it certainly works for 9.0-RELEASE and 9.1-RELEASE. I only hope you're happy typing long commands, and yes, command line editing is available in the shell:

https://ximalas.info/2011/10/17/zfs-root-fs-on-freebsd-9-0/

Due to advances in hard drive technology, for the worse I'm afraid, i.e. 4K disk blocks, I wouldn't bother enabling compression on any ZFS file systems. I might change my blog posts to reflect this stop gap. If you do happen to have 4K drives, you might want to check out this blog post:

https://ximalas.info/2012/01/11/new-server-and-first-attempt-at-running-freebsdamd64-with-zfs-for-all-storage/