Re: [offtopic] ZFS mirror install /mnt is empty

2013-05-15 Thread Trond Endrestøl
Am I the only one to receive these emails twice, delayed only by a 
couple of days since receiving the original emails?

Judging by the headers below, this is either misconfiguration, a MITM 
attack, or something else.

In the meantime I've rigged my mail server to reject anything from 
mail{1,2}.ozon.ru and mx{1,2,3,4,5}.ozon.ru.

I apologise for the extra noise.

Return-Path: p...@kraus-haus.org
Received: from mail1.ozon.ru (mx4.ozon.ru [194.186.179.140])
by mail.fig.ol.no (8.14.7/8.14.7) with ESMTP id r4F5P8XU045283
for trond.endres...@fagskolen.gjovik.no; Wed, 15 May 2013 07:26:22 +0200 
(CEST)
(envelope-from p...@kraus-haus.org)
Received: from intmail03msk.ozon (intmail03msk.ozon [10.18.18.171])
by mail1.ozon.ru (Postfix) with ESMTP id 91DB871A683;
Wed, 15 May 2013 09:25:00 +0400 (MSK)
Received: from mail pickup service by intmail03msk.ozon with Microsoft SMTPSVC;
 Wed, 15 May 2013 09:09:42 +0400
Received: from intmail03msk.ozon ([10.18.18.171]) by intmail02msk.ozon with 
Microsoft SMTPSVC(6.0.3790.4675);
 Mon, 13 May 2013 23:03:59 +0400
Received: from mail1.ozon.ru ([194.186.179.140]) by intmail03msk.ozon with 
Microsoft SMTPSVC(6.0.3790.4675);
 Mon, 13 May 2013 17:38:23 +0400
Received: from localhost (localhost [127.0.0.1])
by mail1.ozon.ru (Postfix) with ESMTP id 01E2471A2AF
for rmilters...@ozon.ru; Mon, 13 May 2013 17:38:24 +0400 (MSK)
X-Virus-Scanned: amavisd-new at ozon.ru
Received: from mail1.ozon.ru ([127.0.0.1])
by localhost (mx4.ozon.ru [127.0.0.1]) (amavisd-new, port 10024)
with ESMTP id gePcZQ5jxUHB for rmilters...@ozon.ru;
Mon, 13 May 2013 17:38:15 +0400 (MSK)
X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6
Received-SPF: pass (freebsd.org: 8.8.178.116 is authorized to use 
'owner-freebsd-questi...@freebsd.org' in 'mfrom' identity (mechanism 
'ip4:8.8.178.116' matched)) receiver=mx4.ozon.ru; identity=mfrom;
envelope-from=owner-freebsd-questi...@freebsd.org; helo=mx2.freebsd.org; 
client-ip=8.8.178.116
Received: from mx2.freebsd.org (mx2.FreeBSD.org [8.8.178.116])
by mail1.ozon.ru (Postfix) with ESMTP id 1A8F571A29C
for rmilters...@ozon.ru; Mon, 13 May 2013 17:38:14 +0400 (MSK)
Received: from hub.freebsd.org (hub.freebsd.org 
[IPv6:2001:1900:2254:206c::16:88])
by mx2.freebsd.org (Postfix) with ESMTP id 0F2DB5D10;
Mon, 13 May 2013 13:38:12 + (UTC)
Received: from hub.freebsd.org (hub.freebsd.org 
[IPv6:2001:1900:2254:206c::16:88])
by hub.freebsd.org (Postfix) with ESMTP id 0C547F99;
Mon, 13 May 2013 13:38:12 + (UTC)
(envelope-from owner-freebsd-questi...@freebsd.org)
Delivered-To: freebsd-questions@freebsd.org
Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115])
 by hub.freebsd.org (Postfix) with ESMTP id BFA02F1D
 for freebsd-questions@freebsd.org; Mon, 13 May 2013 13:38:04 + (UTC)
 (envelope-from p...@kraus-haus.org)
Received: from mail-ve0-x22a.google.com (mail-ve0-x22a.google.com
 [IPv6:2607:f8b0:400c:c01::22a])
 by mx1.freebsd.org (Postfix) with ESMTP id 80032344
 for freebsd-questions@freebsd.org; Mon, 13 May 2013 13:38:04 + (UTC)
Received: by mail-ve0-f170.google.com with SMTP id 14so1764588vea.29
 for freebsd-questions@freebsd.org; Mon, 13 May 2013 06:38:04 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=google.com; s=20120113;
 h=x-received:subject:mime-version:content-type:from:in-reply-to:date
 :cc:content-transfer-encoding:message-id:references:to:x-mailer
 :x-gm-message-state;
 bh=fraUBdJHGprR0SIz026aV6gX1sxLt5mE/dRm08QHvPw=;
 b=R1PQ3JkT2kUn4rr6K5EDjUNtnMx6o1BYa8CdRiRs4o9G5ZK8kGjmgd9aQeAHbu8EC0
 6MSzHevF0eNaZG2N+GCGqUIko/YnY4Y1jh5NuUZ0lwlQR/LnrlLHeJw+gdzFlVHhg+f0
 AdeWkHamaqElHx1jP7mqDp/dB31asA7/fhTZZDm78NCbG42gUf3eGL/bE24Wqq/eznTj
 Zbemj5ndR6xrhuxZ0qGaO96FbygkSVwqcYl3kyVdNlQu195RlbOhNyZ9s+gg8vGbn2gA
 wUsP3vum/QV//qOGYPIrfoaaQFxXJdf6cMDhwS4zXWh/h6OIdCWQRfSfMpqlPRFzCioF
 B3BQ==
X-Received: by 10.52.155.141 with SMTP id vw13mr15269138vdb.43.1368452284000;
 Mon, 13 May 2013 06:38:04 -0700 (PDT)
Received: from [192.168.2.66] ([96.236.21.119])
 by mx.google.com with ESMTPSA id lb10sm12958692veb.5.2013.05.13.06.38.03
 for multiple recipients
 (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
 Mon, 13 May 2013 06:38:03 -0700 (PDT)
Subject: Re: ZFS mirror install /mnt is empty
Mime-Version: 1.0 (Mac OS X Mail 6.3 \(1503\))
From: Paul Kraus p...@kraus-haus.org
In-Reply-To: alpine.bsf.2.00.1305131522340.72...@mail.fig.ol.no
Date: Mon, 13 May 2013 09:38:02 -0400
Message-Id: 8c7a7e3a-355a-405f-840e-a60b4b6cb...@kraus-haus.org
References: 5190058d.2030...@micite.net
 alpine.bsf.2.00.1305130743320.72...@mail.fig.ol.no
 472e17af-b249-4fd3-8f5e-716f8b786...@kraus-haus.org
 alpine.bsf.2.00.1305131522340.72...@mail.fig.ol.no
To: =?iso-8859-1?Q?Trond_Endrest=F8l?= trond.endres...@fagskolen.gjovik.no
X-Mailer: Apple Mail (2.1503)
X-Gm-Message-State: 
ALoCoQlcUPYOxXwSVCSd0DNkAj6rgUfRwZEcezGYlS8MEaQMvM2pjeaHrTE4xzqIXEQy9UlLPanD
Cc: freebsd-questions

Re: ZFS mirror install /mnt is empty

2013-05-15 Thread Roland van Laar

On 13-05-13 07:58, Trond Endrestøl wrote:

On Sun, 12 May 2013 23:11+0200, Roland van Laar wrote:


Hello,

I followed these[1] steps up to the Finishing touches.
I'm using 9.1-RELEASE.

After the install I go into the shell and /mnt is empty.
The mount command shows that the zfs partitions are mounted.
When I reboot the system it can't find the bootloader.

What can I do to fix this?

Thanks,

Roland van Laar

[1] https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE

Looking through the wiki notes, I would do a couple of things
differently.

Since you're running 9.1-RELEASE you should take into account the need
for the /boot/zfs/zpool.cache file until 9.2-RELEASE exists or you
switch to the latest 9-STABLE.

Create your zpool using a command like this one:

zpool create -o cachefile=/tmp/zpool.cache -m /tmp/zroot zroot /dev/gpt/disk0

Copy the /tmp/zpool.cache file to /tmp/zroot/boot/zfs/zpool.cache, or
in your case to /mnt/boot/zfs/zpool.cache after extracting the base
and kernel stuff.

In the wiki section Finishing touches, perform step 4 before step 3.
The final command missing in step 3 should be zfs unmount -a once
more. Avoid step 5 at all costs!

Maybe this recipe is easier to follow; it sure works for 9.0-RELEASE
and 9.1-RELEASE. I only hope you're happy typing long commands, and
yes, command line editing is available in the shell:

https://ximalas.info/2011/10/17/zfs-root-fs-on-freebsd-9-0/


Thank you for that link. This worked (better).
I'm getting into the 'mountroot' shell during the boot. Oh well, I'm 
getting better at this.


The ZFS guides on the wiki leave you with an empty root ZFS filesystem 
after the installation. Once I know a bit more about ZFS, and why the 
FreeBSD wiki is wrong on ZFS installation, I hope to edit them.

Thank you all for your answers,

Regards,

Roland van Laar

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: [offtopic] ZFS mirror install /mnt is empty

2013-05-15 Thread Paul Kraus
I responded to Trond privately.

On May 15, 2013, at 2:25 AM, Trond Endrestøl 
trond.endres...@fagskolen.gjovik.no wrote:

 Am I the only one to receive these emails twice, delayed only by a 
 couple of days since receiving the original emails?
 
 Judging by the headers below this is either misconfiguration, a MITM 
 attack or something else.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: [offtopic] ZFS mirror install /mnt is empty

2013-05-15 Thread Shane Ambler

On 15/05/2013 15:55, Trond Endrestøl wrote:

Am I the only one to receive these emails twice, delayed only by a
couple of days since receiving the original emails?

Judging by the headers below this is either misconfiguration, a MITM
attack or something else.



Yes, I got a duplicate of the original message.
I just noticed that I also got some duplicates of PR responses.

In pr/178505, the closed message is listed before the commit, which is 
timestamped just before the close, and then there is a duplicate of my 
response listed after the commit.


Now I'm thinking it may be me; maybe my copy of Thunderbird didn't save
the sent status and resent duplicates?




Re: ZFS mirror install /mnt is empty

2013-05-14 Thread Paul Kraus
On May 14, 2013, at 12:10 AM, Shane Ambler free...@shaneware.biz wrote:

 When it comes to disk compression I think people overlook the fact that
 it can impact on more than one level.

Compression has effects at multiple levels:

1) CPU resources to compress (and decompress) the data
2) Disk space used
3) I/O to/from disks

 The size of disks these days means that compression doesn't make a big
 difference to storage capacity for most people and 4k blocks mean little
 change in final disk space used.

The 4K block issue is *huge* if the majority of your data is in files 
smaller than 4K. It is also large when you consider that a 5K file will now occupy 8K 
on disk. I am not a UFS on FreeBSD expert, but UFS on Solaris uses a default 
block size of 4K but has a fragment size of 1K. So files are stored on disk 
with 1K resolution (so to speak). By going to a 4K minimum block size you are 
forcing all data up to the next 4K boundary.

Now, if the majority of your data is in large files (1MB or more), then 
the 4K minimum block size probably gets lost in the noise.

The other factor is the actual compressibility of the data. Most media 
files (JPEG, MPEG, GIF, PNG, MP3, AAC, etc.) are already compressed and trying 
to compress them again is not likely to garner any real reduction in size. In 
my experience with the default compression algorithm (lzjb), even uncompressed 
audio files (.AIFF or .WAV) do not compress enough to make the CPU overhead 
worthwhile.

 One thing people seem to miss is the fact that compressed files are
 going to reduce the amount of data sent through the bottleneck that is
 the wire between motherboard and drive. While a 3k file compressed to 1k
 still uses a 4k block on disk it does (should) reduce the true data
 transferred to disk. Given a 9.1 source tree using 865M, if it
 compresses to 400M then it is going to reduce the time to read the
 entire tree during compilation. This would impact a 32 thread build more
 than a 4 thread build.

If the data does not compress well, then you get hit with the CPU 
overhead of compression to no bandwidth or space benefit. How compressible is 
the source tree? [Not a loaded question, I haven't tried to compress it]
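One crude way to answer that without touching ZFS at all is to pipe a tarball of the tree through a fast compressor and compare sizes. This is only a sketch: gzip -1 stands in for ZFS's lzjb, and the /usr/src path is just an example.

```shell
# print raw and compressed byte counts for a directory tree
# (gzip -1 as a cheap stand-in for lzjb)
compress_ratio() {
  raw=$(tar cf - "$1" 2>/dev/null | wc -c)
  packed=$(tar cf - "$1" 2>/dev/null | gzip -1 | wc -c)
  echo "$raw $packed"
}

# e.g. compress_ratio /usr/src
```

If the second number is not well below the first, compression on that dataset is probably not worth the CPU.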

 While it is said that compression adds little overhead, time wise,

Compression most certainly DOES add overhead in terms of time, depending on 
the speed of your CPU and how busy your system is. My home server is an HP 
Proliant Micro with a dual core AMD N36 running at 1.3 GHz. Turning on 
compression hurts performance *if* I am getting less than 1.2:1 compression 
ratio (5 drive RAIDz2 of 1TB Enterprise disks). Above that the I/O bandwidth 
reduction due to the compression makes up for the lost CPU cycles. I have 
managed servers where each case prevailed… CPU limited so compression hurt 
performance and I/O limited where compression helped performance.

 it is
 going to take time to compress the data which is going to increase
 latency. Going from a 6ms platter disk latency to a 0.2ms SSD latency
 gives a noticeable improvement to responsiveness. Adding compression is
 going to bring that back up - possibly higher than 6ms.

Interesting point. I am not sure of the data flow through the code to 
know if compression has a defined latency component, or is just throughput 
limited by CPU cycles to do the compression.

 Together these two factors may level out the total time to read a file.
 
 One question there is whether the zfs cache uses compressed file data
 therefore keeping the latency while eliminating the bandwidth.

Data cached in the ZFS ARC or L2ARC is uncompressed. Data sent via zfs 
send / zfs receive is uncompressed; there had been talk of an option to send / 
receive compressed data, but I do not think it has gone anywhere.

 Personally I have compression turned off (desktop). My thought is that
 the latency added for compression would negate the bandwidth savings.
 
 For a file server I would consider turning it on as network overhead is
 going to hide the latency.

Once again, it all depends on the compressibility of the data, the 
available CPU resources, the speed of the CPU resources, and the I/O bandwidth 
to/from the drives.

Note also that RAIDz (RAIDz2, RAIDz3) have their own computational 
overhead, so compression may be a bigger advantage in this case than in the 
case of a mirror, as the RAID code will have less data to process after being 
compressed.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: ZFS mirror install /mnt is empty

2013-05-13 Thread Paul Kraus
On May 13, 2013, at 1:58 AM, Trond Endrestøl 
trond.endres...@fagskolen.gjovik.no wrote:

 Due to advances in hard drive technology, for the worse I'm afraid, 
 i.e. 4K disk blocks, I wouldn't bother enabling compression on any ZFS 
 file systems. I might change my blog posts to reflect this stopgap.
 
 If you do happen to have 4K drives, you might want to check out this 
 blog post:
 
 https://ximalas.info/2012/01/11/new-server-and-first-attempt-at-running-freebsdamd64-with-zfs-for-all-storage/

I did look; it doesn't explain why not to enable compression on 4K 
sector drives.

From discussion on the zfs-discuss lists (both the old one from 
OpenSolaris and the new one at Illumos) the only issue with 4K sector drives is 
mixing 0.5K sector and 4K sector drives. You can tune the zpool offset to 
handle 4K sector drives just fine, but it is a pool-wide tuning.

http://zfsday.com/wp-content/uploads/2012/08/Why-4k_.pdf has some 4K 
background, and the only mention I see of compression and 4K is that you may 
get less. But… you really need to test your data to see if turning compression 
on is beneficial with any dataset. There is noticeable computational overhead 
to enabling compression. If you are CPU bound, then you will get better 
performance with compression off. If you are limited by the I/O bandwidth to 
your drives, then *if* your data is highly compressible, then you will get 
better performance with compression on. I have managed large pools of both data 
that compresses well and data that does not.

http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks 
discusses the issue and presents solutions using Illumos. I could find no such 
examples for FreeBSD, but I'm sure some of the same techniques would work 
(manually setting the ashift to 12 for 4K disks).
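On FreeBSD of that era, the usual way to force ashift=12 was a temporary gnop(8) overlay that advertises 4K sectors while the pool is created. A sketch, with assumed device and pool names; treat it as the general shape of the workaround, not a verbatim recipe:

```shell
# create a temporary provider that reports 4096-byte sectors
gnop create -S 4096 /dev/gpt/disk0

# build the pool on the .nop device so ZFS chooses ashift=12
zpool create zroot /dev/gpt/disk0.nop

# drop the overlay; the pool keeps its ashift permanently
zpool export zroot
gnop destroy /dev/gpt/disk0.nop
zpool import zroot

# verify: should report ashift: 12
zdb -C zroot | grep ashift
```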

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: ZFS mirror install /mnt is empty

2013-05-13 Thread Trond Endrestøl
On Mon, 13 May 2013 08:40-0400, Paul Kraus wrote:

 On May 13, 2013, at 1:58 AM, Trond Endrestøl 
 trond.endres...@fagskolen.gjovik.no wrote:
 
  Due to advances in hard drive technology, for the worse I'm afraid, 
  i.e. 4K disk blocks, I wouldn't bother enabling compression on any ZFS 
  file systems. I might change my blog posts to reflect this stopgap.
  
  If you do happen to have 4K drives, you might want to check out this 
  blog post:
  
  https://ximalas.info/2012/01/11/new-server-and-first-attempt-at-running-freebsdamd64-with-zfs-for-all-storage/

   I did look, it doesn't explain why not to enable compression on 4k 
 sector drives.

I guess it's due to my (mis)understanding that files shorter than 4KB 
stored on 4K drives never will be subject to compression. And as you 
state below, the degree of compression depends largely on the data at 
hand.
 
   From discussion on the zfs-discuss lists (both the old one from 
 OpenSolaris and the new one at Illumos) the only issue with 4K sector drives 
 is mixing 0.5K sector and 4K sector drives. You can tune the zpool offset to 
 handle 4K sector drives just fine, but it is a pool-wide tuning.
 
   http://zfsday.com/wp-content/uploads/2012/08/Why-4k_.pdf has some 4K 
 background, and the only mention I see of compression and 4K is that you may 
 get less. But… you really need to test your data to see if turning 
 compression on is beneficial with any dataset. There is noticeable 
 computational overhead to enabling compression. If you are CPU bound, then 
 you will get better performance with compression off. If you are limited by 
 the I/O bandwidth to your drives, then *if* your data is highly compressible, 
 then you will get better performance with compression on. I have managed 
 large pools of both data that compresses well and data that does not.
 
   http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks 
 discusses the issue and presents solutions using Illumos. I could find no 
 such examples for FreeBSD, but I'm sure some of the same techniques would 
 work (manually setting the ashift to 12 for 4K disks).
 
 --
 Paul Kraus
 Deputy Technical Director, LoneStarCon 3
 Sound Coordinator, Schenectady Light Opera Company

-- 
+---++
| Vennlig hilsen,   | Best regards,  |
| Trond Endrestøl,  | Trond Endrestøl,   |
| IT-ansvarlig, | System administrator,  |
| Fagskolen Innlandet,  | Gjøvik Technical College, Norway,  |
| tlf. mob.   952 62 567,   | Cellular...: +47 952 62 567,   |
| sentralbord 61 14 54 00.  | Switchboard: +47 61 14 54 00.  |
+---++

Re: ZFS mirror install /mnt is empty

2013-05-13 Thread Paul Kraus
On May 13, 2013, at 9:25 AM, Trond Endrestøl 
trond.endres...@fagskolen.gjovik.no wrote:
 
 I guess it's due to my (mis)understanding that files shorter than 4KB 
 stored on 4K drives never will be subject to compression. And as you 
 state below, the degree of compression depends largely on the data at 
 hand.

Not a misunderstanding at all. With a 4K minimum block size (which is 
what a 4K sector size implies), a file less than 4KB will not compress at all. 
While ZFS does have a variable block size (512B to 128KB), with a 4K minimum 
block size (just like with any fixed block FS with a 4KB block size), small 
files take up more space than they should (a 1KB file takes up an entire 4KB 
block). This ends up being an artifact of the block size and not ZFS; any FS on 
a 4K sector drive will have similar behavior.
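The rounding described above is easy to see with a little shell arithmetic (a sketch; 4096 is the assumed minimum block size):

```shell
# bytes actually allocated when sizes round up to the next 4K boundary
alloc4k() { echo $(( (($1 + 4095) / 4096) * 4096 )); }

alloc4k 1024      # a 1KB file still consumes 4096 bytes
alloc4k 5120      # a 5KB file consumes 8192 bytes
alloc4k 1048576   # a 1MB file consumes exactly 1048576 bytes
```

For large files the overhead is at most one block, which is why it gets lost in the noise.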

I leave compression off on most of my datasets, only turning it on for 
the ones where I see a real benefit. /var compresses very well (I turn off 
compression in /etc/newsyslog.conf and let ZFS compress even the current logs 
:-), I find that some VMs compress very well, media files do NOT compress very 
well (they tend to already be compressed), generic data compresses well, as do 
scanned documents (uncompressed PDFs). Your individual results will vary :-)

Also remember, if you start with compression on and after a while you 
are not seeing good compression ratios, go ahead and turn it off. The already 
written data will remain compressed, but new writes will not be.
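Checking whether a dataset is earning its keep, and toggling the property, is a one-liner each. A sketch; the dataset names here are made up:

```shell
# see the current setting and the achieved ratio
zfs get compression,compressratio zroot/var

# enable or disable per dataset; as noted above, existing blocks keep
# the state they were written with, only new writes are affected
zfs set compression=lzjb zroot/var
zfs set compression=off zroot/media
```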

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: ZFS mirror install /mnt is empty

2013-05-13 Thread Shane Ambler



On Mon, 13 May 2013 08:40-0400, Paul Kraus wrote:

On May 13, 2013, at 1:58 AM, Trond Endrestøl wrote:

Due to advances in hard drive technology, for the worse I'm
afraid, i.e. 4K disk blocks, I wouldn't bother enabling
compression on any ZFS file systems. I might change my blog posts
to reflect this stopgap.



I guess it's due to my (mis)understanding that files shorter than
4KB stored on 4K drives never will be subject to compression. And as
you state below, the degree of compression depends largely on the
data at hand.


I don't want to start a big discussion but want to express an opinion
that others may think about.

When it comes to disk compression I think people overlook the fact that
it can have an impact on more than one level.

The size of disks these days means that compression doesn't make a big
difference to storage capacity for most people and 4k blocks mean little
change in final disk space used.

One thing people seem to miss is the fact that compressed files are
going to reduce the amount of data sent through the bottleneck that is
the wire between motherboard and drive. While a 3k file compressed to 1k
still uses a 4k block on disk it does (should) reduce the true data
transferred to disk. Given a 9.1 source tree using 865M, if it
compresses to 400M then it is going to reduce the time to read the
entire tree during compilation. This would impact a 32 thread build more
than a 4 thread build.

While it is said that compression adds little overhead, time wise, it is
going to take time to compress the data which is going to increase
latency. Going from a 6ms platter disk latency to a 0.2ms SSD latency
gives a noticeable improvement to responsiveness. Adding compression is
going to bring that back up - possibly higher than 6ms.

Together these two factors may level out the total time to read a file.

One question there is whether the zfs cache uses compressed file data
therefore keeping the latency while eliminating the bandwidth.

Personally I have compression turned off (desktop). My thought is that
the latency added for compression would negate the bandwidth savings.

For a file server I would consider turning it on as network overhead is
going to hide the latency.



ZFS mirror install /mnt is empty

2013-05-12 Thread Roland van Laar

Hello,

I followed these[1] steps up to the Finishing touches.
I'm using 9.1-RELEASE.

After the install I go into the shell and /mnt is empty.
The mount command shows that the zfs partitions are mounted.
When I reboot the system it can't find the bootloader.

What can I do to fix this?

Thanks,

Roland van Laar

[1] https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE


Re: ZFS mirror install /mnt is empty

2013-05-12 Thread Trond Endrestøl
On Sun, 12 May 2013 23:11+0200, Roland van Laar wrote:

 Hello,
 
 I followed these[1] steps up to the Finishing touches.
 I'm using 9.1-RELEASE.
 
 After the install I go into the shell and /mnt is empty.
 The mount command shows that the zfs partitions are mounted.
 When I reboot the system it can't find the bootloader.
 
 What can I do to fix this?
 
 Thanks,
 
 Roland van Laar
 
 [1] https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE

Looking through the wiki notes, I would do a couple of things 
differently.

Since you're running 9.1-RELEASE you should take into account the need 
for the /boot/zfs/zpool.cache file until 9.2-RELEASE exists or you 
switch to the latest 9-STABLE.

Create your zpool using a command like this one:

zpool create -o cachefile=/tmp/zpool.cache -m /tmp/zroot zroot /dev/gpt/disk0

Copy the /tmp/zpool.cache file to /tmp/zroot/boot/zfs/zpool.cache, or 
in your case to /mnt/boot/zfs/zpool.cache after extracting the base 
and kernel stuff.
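In shell terms, the two steps above amount to something like the following (pool and device names follow the guide's example; a sketch, not a complete install script):

```shell
# create the pool with an explicit cachefile and a temporary mountpoint
zpool create -o cachefile=/tmp/zpool.cache -m /tmp/zroot zroot /dev/gpt/disk0

# ... extract the base and kernel distributions into /tmp/zroot here ...

# then preserve the cachefile so the kernel can find the pool at boot
cp /tmp/zpool.cache /tmp/zroot/boot/zfs/zpool.cache
```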

In the wiki section Finishing touches, perform step 4 before step 3. 
The final command missing in step 3 should be zfs unmount -a once 
more. Avoid step 5 at all costs!

Maybe this recipe is easier to follow; it sure works for 9.0-RELEASE 
and 9.1-RELEASE. I only hope you're happy typing long commands, and 
yes, command line editing is available in the shell:

https://ximalas.info/2011/10/17/zfs-root-fs-on-freebsd-9-0/

Due to advances in hard drive technology, for the worse I'm afraid, 
i.e. 4K disk blocks, I wouldn't bother enabling compression on any ZFS 
file systems. I might change my blog posts to reflect this stopgap.

If you do happen to have 4K drives, you might want to check out this 
blog post:

https://ximalas.info/2012/01/11/new-server-and-first-attempt-at-running-freebsdamd64-with-zfs-for-all-storage/

-- 
+---++
| Vennlig hilsen,   | Best regards,  |
| Trond Endrestøl,  | Trond Endrestøl,   |
| IT-ansvarlig, | System administrator,  |
| Fagskolen Innlandet,  | Gjøvik Technical College, Norway,  |
| tlf. mob.   952 62 567,   | Cellular...: +47 952 62 567,   |
| sentralbord 61 14 54 00.  | Switchboard: +47 61 14 54 00.  |
+---++