Re: to gmirror or to ZFS

2013-07-22 Thread krad
But then ZFS doesn't access every block on the disk, does it - only the
allocated ones


On 20 July 2013 21:07, Daniel Feenberg feenb...@nber.org wrote:



 On Sat, 20 Jul 2013, Steve O'Hara-Smith wrote:

  On Sat, 20 Jul 2013 18:14:20 +0100
 Frank Leonhardt fra...@fjl.co.uk wrote:

  It's worth noting, as a warning for anyone who hasn't been there, that
 the number of times a second drive in a RAID system fails during a
 rebuild is higher than would be expected. During a rebuild the remaining
 drives get thrashed and run hot, and if they're on the edge, that's when
 they're going to go - and at the most inconvenient time. Okay - obvious
 when you think about it, but by then it's too late.


 Having the cabinet stuffed full of nominally identical drives
 bought at the same time from the same supplier tends to add to the
 probability that more than one drive is on the edge when one goes. It's a
 pity there are now only two manufacturers of spinning rust.


 Often this is presumed to be the reason for double failures close in
 time; common-mode failures such as environment, a defective power
 supply or excess voltage can also be blamed. I have to think that the most
 common cause for a second failure soon after the first is that a failed
 drive often isn't detected until a particular sector is read or written.
 Since the resilvering reads and writes every sector on multiple disks,
 including unused sectors, it can detect latent problems that may have
 existed since the drive was new but which haven't been used for data yet,
 or have gone bad since the last write but haven't been read since.

 The ZFS scrub processes only sectors with data, so it provides only
 partial protection against double failures.

 Daniel Feenberg
 NBER




 --
 Steve O'Hara-Smith st...@sohara.org




Re: to gmirror or to ZFS

2013-07-22 Thread Shane Ambler

On 21/07/2013 17:31, Steve O'Hara-Smith wrote:

On Sun, 21 Jul 2013 14:13:39 +0930
Shane Ambler free...@shaneware.biz wrote:


On 21/07/2013 04:42, Steve O'Hara-Smith wrote:

It's a pity there are now only two manufacturers of spinning rust.


I thought there were three left - Seagate, WD and Toshiba


I assumed Toshiba were out of the game; I've never seen anything
bigger than 500GB with a Toshiba label.



I have a 2.5" 1TB Toshiba USB drive here.

I see Toshiba 2 and 3TB 3.5" drives listed online.

As I recall the Hitachi selloff, WD got the 2.5" business and Toshiba got
the 3.5"; I think the split was the only way to get the takeover approved.




Re: to gmirror or to ZFS

2013-07-21 Thread Steve O'Hara-Smith
On Sun, 21 Jul 2013 14:13:39 +0930
Shane Ambler free...@shaneware.biz wrote:

 On 21/07/2013 04:42, Steve O'Hara-Smith wrote:
  It's a pity there are now only two manufacturers of spinning rust.
 
 I thought there were three left - Seagate, WD and Toshiba

I assumed Toshiba were out of the game; I've never seen anything
bigger than 500GB with a Toshiba label.

-- 
Steve O'Hara-Smith  |   Directable Mirror Arrays
C:WIN  | A better way to focus the sun
The computer obeys and wins.|licences available see
You lose and Bill collects. |http://www.sohara.org/


Re: to gmirror or to ZFS

2013-07-21 Thread Perry Hutchison
Steve O'Hara-Smith st...@sohara.org wrote:

 It's a pity there are now only two manufacturers of spinning rust.

I didn't think there were _any_!  Haven't oxide-coated platters gone
the way of the dodo bird?


Re: to gmirror or to ZFS

2013-07-21 Thread Steve O'Hara-Smith
On Sun, 21 Jul 2013 00:27:01 -0700
per...@pluto.rain.com (Perry Hutchison) wrote:

 Steve O'Hara-Smith st...@sohara.org wrote:
 
  It's a pity there are now only two manufacturers of spinning rust.
 
 I didn't think there were _any_!  Haven't oxide-coated platters gone
 the way of the dodo bird?

Ah the technicalities, this is a software group :-)

-- 
Steve O'Hara-Smith  |   Directable Mirror Arrays
C:WIN  | A better way to focus the sun
The computer obeys and wins.|licences available see
You lose and Bill collects. |http://www.sohara.org/


Re: to gmirror or to ZFS

2013-07-20 Thread Frank Leonhardt


On 16/07/2013 20:48, Charles Swiger wrote:

Hi--

On Jul 16, 2013, at 11:27 AM, Johan Hendriks joh.hendr...@gmail.com wrote:

Well, don't do that.  :-)

When the server reboots because of a power failure at night, then it boots.
Then it starts to rebuild the mirror on its own, and later the fsck kicks in.

Not much I can do about it.

Maybe I should have done it without the automatic attachment of a new device.

It's normally the case that getting a hot spare automatically attached should be
fine, but not if you also have the box go down entirely and need to fsck.

I'm more used to needing to explicitly physically swap out a failed mirror
component, in which case one can make sure the system is OK before the
replacement drive goes in.

Agreed. Blaming gmirror for this kind of thing overlooks the overall 
design and operating procedures of the system, and assuming ZFS would 
have been any better may be wishful thinking. I've had plenty of gmirror 
crashes over the years, and they have all been recoverable. One thing I 
never allow it to do is to rebuild automatically. That's something for a 
human to initiate once the problem has been identified, and if it's 
flaky power in the data centre the job is postponed until I'm satisfied 
it's not going to drop during the rebuild. IME, one power failure is 
normally followed by several more.


It's worth noting, as a warning for anyone who hasn't been there, that
the number of times a second drive in a RAID system fails during a
rebuild is higher than would be expected. During a rebuild the remaining
drives get thrashed and run hot, and if they're on the edge, that's when
they're going to go - and at the most inconvenient time. Okay - obvious
when you think about it, but by then it's too late.


Regards, Frank.



Re: to gmirror or to ZFS

2013-07-20 Thread Steve O'Hara-Smith
On Sat, 20 Jul 2013 18:14:20 +0100
Frank Leonhardt fra...@fjl.co.uk wrote:

 It's worth noting, as a warning for anyone who hasn't been there, that
 the number of times a second drive in a RAID system fails during a
 rebuild is higher than would be expected. During a rebuild the remaining
 drives get thrashed and run hot, and if they're on the edge, that's when
 they're going to go - and at the most inconvenient time. Okay - obvious
 when you think about it, but by then it's too late.

Having the cabinet stuffed full of nominally identical drives
bought at the same time from the same supplier tends to add to the
probability that more than one drive is on the edge when one goes. It's a
pity there are now only two manufacturers of spinning rust.

-- 
Steve O'Hara-Smith st...@sohara.org


Re: to gmirror or to ZFS

2013-07-20 Thread Daniel Feenberg



On Sat, 20 Jul 2013, Steve O'Hara-Smith wrote:


On Sat, 20 Jul 2013 18:14:20 +0100
Frank Leonhardt fra...@fjl.co.uk wrote:


It's worth noting, as a warning for anyone who hasn't been there, that
the number of times a second drive in a RAID system fails during a
rebuild is higher than would be expected. During a rebuild the remaining
drives get thrashed and run hot, and if they're on the edge, that's when
they're going to go - and at the most inconvenient time. Okay - obvious
when you think about it, but by then it's too late.


Having the cabinet stuffed full of nominally identical drives
bought at the same time from the same supplier tends to add to the
probability that more than one drive is on the edge when one goes. It's a
pity there are now only two manufacturers of spinning rust.


Often this is presumed to be the reason for double failures close in
time; common-mode failures such as environment, a defective power
supply or excess voltage can also be blamed. I have to think that the most
common cause for a second failure soon after the first is that a failed
drive often isn't detected until a particular sector is read or written.
Since the resilvering reads and writes every sector on multiple disks,
including unused sectors, it can detect latent problems that may have
existed since the drive was new but which haven't been used for data yet,
or have gone bad since the last write but haven't been read since.


The ZFS scrub processes only sectors with data, so it provides only 
partial protection against double failures.
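
For instance, a scrub only touches what ZFS has allocated, while a
whole-device read exercises every sector. A rough sketch, assuming a pool
named tank on ada0, with smartctl coming from the sysutils/smartmontools
port:

    # verify checksums of all allocated blocks in the pool
    zpool scrub tank
    # force a read of every sector, used or not
    dd if=/dev/ada0 of=/dev/null bs=1m
    # or have the drive test its own surface
    smartctl -t long /dev/ada0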


Daniel Feenberg
NBER




--
Steve O'Hara-Smith st...@sohara.org




Re: to gmirror or to ZFS

2013-07-20 Thread Shane Ambler

On 21/07/2013 04:42, Steve O'Hara-Smith wrote:

It's a pity there are now only two manufacturers of spinning rust.


I thought there were three left - Seagate, WD and Toshiba



Re: to gmirror or to ZFS

2013-07-19 Thread aurfalien

On Jul 16, 2013, at 11:42 AM, Warren Block wrote:

 On Tue, 16 Jul 2013, aurfalien wrote:
 On Jul 16, 2013, at 2:41 AM, Shane Ambler wrote:
 
 I doubt that you would save any RAM by having the OS on a non-ZFS drive;
 as you will already be using ZFS, chances are that non-ZFS drives would
 only increase RAM usage by adding a second cache. ZFS uses its own cache
 system and isn't going to share its cache with other system-managed
 drives. I'm not actually certain whether the system cache still sits
 above the ZFS cache or not; I think I read it bypasses the traditional
 drive cache.
 
 For the ZFS cache you can set the maximum usage by adjusting
 vfs.zfs.arc_max; that is a system-wide setting and isn't going to
 increase if you have two zpools.
 
 Tip: set the arc_max value - by default ZFS will use all physical RAM
 for cache, so set it to be sure you have enough RAM left for any
 services you want running.
 
 Have you considered using one or both SSD drives with ZFS? They can be
 added as cache or log devices to help performance.
 See man zpool under Intent Log and Cache Devices.
 
 This is a very interesting point.
 
 In terms of SSDs for cache, I was planning on using a pair of Samsung Pro 
 512GB SSDs for this purpose (which I haven't bought yet).
 
 But I tire of buying stuff, so I have a pair of 40GB Intel SSDs for use as 
 sys disks and several Intel 160GB SSDs lying around that I can combine with 
 the existing 256GB SSDs for a cache.
 
 Then use my 36x3TB for the beasty NAS.
 
 Agreed that 256G mirrored SSDs are kind of wasted as system drives.  The 40G 
 mirror sounds ideal.


Update:

I went with ZFS as I didn't want to confuse the toolset needed to support this 
server.  Although gmirror is not hard to figure out, I wanted consistency in 
systems.

So I've got 9.1-RELEASE booted using a mirrored ZFS system disk.

The drives do support TRIM, but I am unsure how this plays with ZFS.  I used 
the standard partition scheme of:

root@kronos:/root # gpart show
=>        34  78165293  da0  GPT  (37G)
          34       128    1  freebsd-boot  (64k)
         162         6       - free -  (3.0k)
         168   8388608    2  freebsd-swap  (4.0G)
     8388776  69776544    3  freebsd-zfs  (33G)
    78165320         7       - free -  (3.5k)

=>        34  78165293  da1  GPT  (37G)
          34       128    1  freebsd-boot  (64k)
         162         6       - free -  (3.0k)
         168   8388608    2  freebsd-swap  (4.0G)
     8388776  69776544    3  freebsd-zfs  (33G)
    78165320         7       - free -  (3.5k)
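
On the TRIM question, a rough sketch using the device name from the output
above; note this is an assumption on my part - 9.1's ZFS did not yet issue
TRIM itself, and the vfs.zfs.trim.enabled sysctl only appeared in later
releases:

    # does the SSD advertise TRIM? (works for ATA-attached disks)
    camcontrol identify da0 | grep TRIM
    # on newer releases, check whether ZFS TRIM is enabled:
    sysctl vfs.zfs.trim.enabled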

At any rate, thank you for the replies; I very much appreciate it.

Especially since I'm building a rather large production-worthy NAS without 
knowing a lick of FreeBSD.

The reasons for going with FreeBSD are twofold:

ZFS stability; it seems a better marriage than ZoL.
It correctly provides NFS pre-op attributes (mtime) on the write reply;
Linux does not.

While it's a steep learning curve, the two points above require the use of 
FreeBSD or the like.

- aurf


Re: to gmirror or to ZFS

2013-07-17 Thread krad
You would in theory, as from what I remember every ZFS filesystem takes up
64 KB of RAM, so the savings could be massive 8)


On 16 July 2013 10:41, Shane Ambler free...@shaneware.biz wrote:

 On 16/07/2013 14:41, aurfalien wrote:


 On Jul 15, 2013, at 9:23 PM, Warren Block wrote:

  On Mon, 15 Jul 2013, aurfalien wrote:

  ... that's the question :)

 At any rate, I'm building a rather large 100+TB NAS using ZFS.

 However, for my OS, should I also use ZFS, or simply gmirror? I've a
  dedicated pair of 256GB SSD drives for it.  I didn't ask for SSD
  sys drives; this system just came with 'em.

 This is more of a best practices q.


 ZFS has data integrity checking, gmirror has low RAM overhead.
 gmirror is, at present, restricted to MBR partitioning due to
 metadata conflicts with GPT, so 2TB is the maximum size.

 Best practices... depends on your use.  gmirror for the system
 leaves more RAM for ZFS.


 Perfect, thanks Warren.

 Just what I was looking for.


 I doubt that you would save any RAM by having the OS on a non-ZFS drive;
 as you will already be using ZFS, chances are that non-ZFS drives would
 only increase RAM usage by adding a second cache. ZFS uses its own cache
 system and isn't going to share its cache with other system-managed
 drives. I'm not actually certain whether the system cache still sits
 above the ZFS cache or not; I think I read it bypasses the traditional
 drive cache.

 For the ZFS cache you can set the maximum usage by adjusting
 vfs.zfs.arc_max; that is a system-wide setting and isn't going to
 increase if you have two zpools.

 Tip: set the arc_max value - by default ZFS will use all physical RAM
 for cache, so set it to be sure you have enough RAM left for any
 services you want running.

 Have you considered using one or both SSD drives with ZFS? They can be
 added as cache or log devices to help performance.
 See man zpool under Intent Log and Cache Devices.





Re: to gmirror or to ZFS

2013-07-17 Thread krad
Not recommended any more - you should run SU+J if your version supports it.
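
For an existing UFS filesystem that can be unmounted, a sketch (device name
illustrative):

    # enable soft updates journaling (SU+J)
    tunefs -j enable /dev/ada0p2
    # confirm the flags with: tunefs -p /dev/ada0p2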


On 17 July 2013 00:08, Nikos Vassiliadis nv...@gmx.com wrote:

 On 07/16/13 21:27, Johan Hendriks wrote:

 On Tuesday 16 July 2013, Charles Swiger (cswi...@mac.com) wrote the
 following:

  Hi--

 On Jul 16, 2013, at 10:33 AM, Johan Hendriks joh.hendr...@gmail.com wrote:
 [ ... ]

 I would use ZFS for the OS.
 I have a couple of servers that did not survive a power failure with
 gmirror.
 The problem I had was that when the power failed, one disk was in a
 rebuilding state, and then when the background fsck started or was
 busy for some time it would crash the whole server.


 Well, don't do that.  :-)



 When the server reboots because of a power failure at night, then it boots.
 Then it starts to rebuild the mirror on its own, and later the fsck kicks
 in.

 Not much I can do about it.


 You could add geom_journal which will minimize the time of fsck to a
 second or something like that. Then you don't have to use background fsck
 anymore.

 Actually geom_journal's manual page mentions an interesting
 side-effect of geom_journal over a geom_mirror:

 you can turn off component synchronization.

 Geom_journal will re-play last writes so whatever was
 changed just before the crash will be re-written to both disks.
 I haven't used this but it makes sense in theory.


  Maybe I should have done it without the automatic attachment of a new
 device.


 I always turn off automatic synchronization of stale components
 as well.

 It seems to me that people don't really use geom_journal,
 or maybe they just don't talk about it like it's some
 sort of secret :)

 just my two cents,

 Nikos





Re: to gmirror or to ZFS

2013-07-16 Thread Shane Ambler

On 16/07/2013 14:41, aurfalien wrote:


On Jul 15, 2013, at 9:23 PM, Warren Block wrote:


On Mon, 15 Jul 2013, aurfalien wrote:


... that's the question :)

At any rate, I'm building a rather large 100+TB NAS using ZFS.

However, for my OS, should I also use ZFS, or simply gmirror? I've a
 dedicated pair of 256GB SSD drives for it.  I didn't ask for SSD
 sys drives; this system just came with 'em.

This is more of a best practices q.


ZFS has data integrity checking, gmirror has low RAM overhead.
gmirror is, at present, restricted to MBR partitioning due to
metadata conflicts with GPT, so 2TB is the maximum size.

Best practices... depends on your use.  gmirror for the system
leaves more RAM for ZFS.


Perfect, thanks Warren.

Just what I was looking for.


I doubt that you would save any RAM by having the OS on a non-ZFS drive;
as you will already be using ZFS, chances are that non-ZFS drives would only
increase RAM usage by adding a second cache. ZFS uses its own cache
system and isn't going to share its cache with other system-managed
drives. I'm not actually certain whether the system cache still sits above
the ZFS cache or not; I think I read it bypasses the traditional drive cache.

For the ZFS cache you can set the maximum usage by adjusting vfs.zfs.arc_max;
that is a system-wide setting and isn't going to increase if you have
two zpools.

Tip: set the arc_max value - by default ZFS will use all physical RAM
for cache, so set it to be sure you have enough RAM left for any services
you want running.
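
A minimal sketch of that tip - the tunable goes in /boot/loader.conf, and
the 4G figure below is only an example, to be sized against your workload:

    # /boot/loader.conf - cap the ZFS ARC
    vfs.zfs.arc_max="4G"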

Have you considered using one or both SSD drives with ZFS? They can be
added as cache or log devices to help performance.
See man zpool under Intent Log and Cache Devices.
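
As a sketch of the zpool syntax involved, assuming a pool named tank and
spare SSDs da2-da4; a cache (L2ARC) device can be a single disk, while a
log (ZIL) device is usually mirrored:

    # add an L2ARC cache device
    zpool add tank cache da2
    # add a mirrored intent-log device
    zpool add tank log mirror da3 da4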



Re: to gmirror or to ZFS

2013-07-16 Thread Frank Leonhardt

On 16/07/2013 10:41, Shane Ambler wrote:

On 16/07/2013 14:41, aurfalien wrote:


On Jul 15, 2013, at 9:23 PM, Warren Block wrote:


On Mon, 15 Jul 2013, aurfalien wrote:


... that's the question :)

At any rate, I'm building a rather large 100+TB NAS using ZFS.

However, for my OS, should I also use ZFS, or simply gmirror? I've a
 dedicated pair of 256GB SSD drives for it.  I didn't ask for SSD
 sys drives; this system just came with 'em.

This is more of a best practices q.


ZFS has data integrity checking, gmirror has low RAM overhead.
gmirror is, at present, restricted to MBR partitioning due to
metadata conflicts with GPT, so 2TB is the maximum size.

Best practices... depends on your use.  gmirror for the system
leaves more RAM for ZFS.


Perfect, thanks Warren.

Just what I was looking for.


I doubt that you would save any RAM by having the OS on a non-ZFS drive;
as you will already be using ZFS, chances are that non-ZFS drives would only
increase RAM usage by adding a second cache. ZFS uses its own cache
system and isn't going to share its cache with other system-managed
drives. I'm not actually certain whether the system cache still sits above
the ZFS cache or not; I think I read it bypasses the traditional drive cache.

For the ZFS cache you can set the maximum usage by adjusting vfs.zfs.arc_max;
that is a system-wide setting and isn't going to increase if you have
two zpools.

Tip: set the arc_max value - by default ZFS will use all physical RAM
for cache, so set it to be sure you have enough RAM left for any services
you want running.

Have you considered using one or both SSD drives with ZFS? They can be
added as cache or log devices to help performance.
See man zpool under Intent Log and Cache Devices.

I agree with the sentiment of using the SSD as ZFS cache - it's possibly 
the only logical use for them.


I guess that with 100TB worth of Winchesters you're not on a very tight 
budget, and not too tight on RAM for the OS either. If I was going to do 
this I'd stick with the OS on UFS and a gmirror because I simply don't 
trust ZFS. This is based on pure prejudice and inexperience.


I know how to arrange disks on a UNIX file system for performance - what 
to use for swap, where tmp files should go and so on. I also know where 
every file will be, physically, in the event of trouble. And here's the 
clincher: If the machine blows up I can simply take one of the mirrored 
drives, slap it in to some new hardware and I've got a very reasonable 
chance that it'll boot. Can I do this with ZFS? I get the feeling that 
the answer is an emphatic maybe.


So all things considered, I'd need a good reason not to stick with what 
I know works reliably and can be recovered in the event of a disaster 
(UFS), but I'm happy to watch and learn from everyone else's experience!




Re: to gmirror or to ZFS

2013-07-16 Thread Johan Hendriks
On Tuesday 16 July 2013, Frank Leonhardt (fra...@fjl.co.uk) wrote the
following:

 On 16/07/2013 10:41, Shane Ambler wrote:

 On 16/07/2013 14:41, aurfalien wrote:


 On Jul 15, 2013, at 9:23 PM, Warren Block wrote:

  On Mon, 15 Jul 2013, aurfalien wrote:

 ... that's the question :)

 At any rate, I'm building a rather large 100+TB NAS using ZFS.

 However, for my OS, should I also use ZFS, or simply gmirror? I've a
  dedicated pair of 256GB SSD drives for it.  I didn't ask for SSD
  sys drives; this system just came with 'em.

 This is more of a best practices q.


 ZFS has data integrity checking, gmirror has low RAM overhead.
 gmirror is, at present, restricted to MBR partitioning due to
 metadata conflicts with GPT, so 2TB is the maximum size.

 Best practices... depends on your use.  gmirror for the system
 leaves more RAM for ZFS.


 Perfect, thanks Warren.

 Just what I was looking for.


 I doubt that you would save any RAM by having the OS on a non-ZFS drive;
 as you will already be using ZFS, chances are that non-ZFS drives would
 only increase RAM usage by adding a second cache. ZFS uses its own cache
 system and isn't going to share its cache with other system-managed
 drives. I'm not actually certain whether the system cache still sits
 above the ZFS cache or not; I think I read it bypasses the traditional
 drive cache.

 For the ZFS cache you can set the maximum usage by adjusting
 vfs.zfs.arc_max; that is a system-wide setting and isn't going to
 increase if you have two zpools.

 Tip: set the arc_max value - by default ZFS will use all physical RAM
 for cache, so set it to be sure you have enough RAM left for any
 services you want running.

 Have you considered using one or both SSD drives with ZFS? They can be
 added as cache or log devices to help performance.
 See man zpool under Intent Log and Cache Devices.

  I agree with the sentiment of using the SSD as ZFS cache - it's possibly
 the only logical use for them.

 I guess that with 100TB worth of Winchesters you're not on a very tight
 budget, and not too tight on RAM for the OS either. If I was going to do
 this I'd stick with the OS on UFS and a gmirror because I simply don't
 trust ZFS. This is based on pure prejudice and inexperience.

 I know how to arrange disks on a UNIX file system for performance - what
 to use for swap, where tmp files should go and so on. I also know where
 every file will be, physically, in the event of trouble. And here's the
 clincher: If the machine blows up I can simply take one of the mirrored
 drives, slap it in to some new hardware and I've got a very reasonable
 chance that it'll boot. Can I do this with ZFS? I get the feeling that the
 answer is an emphatic maybe.

 So all things considered, I'd need a good reason not to stick with what I
 know works reliably and can be recovered in the event of a disaster (UFS),
 but I'm happy to watch and learn from everyone else's experience!


I would use ZFS for the OS.
I have a couple of servers that did not survive a power failure with
gmirror.
The problem I had was that when the power failed, one disk was in a
rebuilding state, and then when the background fsck started or was busy
for some time it would crash the whole server.
Removing the disk that was rebuilding resolved the issue.
This happened to me more than once.
Most of the time it worked as advertised, but not always.

Before people tell me to use a UPS: I used a UPS, but the damn thing gave
way itself.
Then, after it came back from the warranty repair, it gave way again.
Sometimes the power came back right away, leaving some servers to survive
and some in the state they were in.
It was hard to find the cause in the beginning because some servers did
survive the power failure.
We did not suspect the UPS at first.

Anyway, gmirror did not work for me in all cases.
I am now running a few servers with a ZFS root.
I have not had any problems with them till now (knock on wood).
Since reading that swap on a ZFS root can cause trouble, I have a separate
freebsd-swap partition for the swap.
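
A sketch of what that looks like, with a hypothetical GPT label:

    # create and label a swap partition on each disk
    gpart add -t freebsd-swap -s 4G -l swap0 da0
    # /etc/fstab - swap on the raw partition rather than on a zvol
    /dev/gpt/swap0  none  swap  sw  0  0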

Regards,
Johan







Re: to gmirror or to ZFS

2013-07-16 Thread aurfalien

On Jul 16, 2013, at 2:41 AM, Shane Ambler wrote:

 On 16/07/2013 14:41, aurfalien wrote:
 
 On Jul 15, 2013, at 9:23 PM, Warren Block wrote:
 
 On Mon, 15 Jul 2013, aurfalien wrote:
 
... that's the question :)
 
 At any rate, I'm building a rather large 100+TB NAS using ZFS.
 
 However, for my OS, should I also use ZFS, or simply gmirror? I've a
 dedicated pair of 256GB SSD drives for it.  I didn't ask for SSD
 sys drives; this system just came with 'em.
 
 This is more of a best practices q.
 
 ZFS has data integrity checking, gmirror has low RAM overhead.
 gmirror is, at present, restricted to MBR partitioning due to
 metadata conflicts with GPT, so 2TB is the maximum size.
 
 Best practices... depends on your use.  gmirror for the system
 leaves more RAM for ZFS.
 
 Perfect, thanks Warren.
 
 Just what I was looking for.
 
 I doubt that you would save any RAM by having the OS on a non-ZFS drive;
 as you will already be using ZFS, chances are that non-ZFS drives would
 only increase RAM usage by adding a second cache. ZFS uses its own cache
 system and isn't going to share its cache with other system-managed
 drives. I'm not actually certain whether the system cache still sits
 above the ZFS cache or not; I think I read it bypasses the traditional
 drive cache.
 
 For the ZFS cache you can set the maximum usage by adjusting
 vfs.zfs.arc_max; that is a system-wide setting and isn't going to
 increase if you have two zpools.
 
 Tip: set the arc_max value - by default ZFS will use all physical RAM
 for cache, so set it to be sure you have enough RAM left for any
 services you want running.
 
 Have you considered using one or both SSD drives with ZFS? They can be
 added as cache or log devices to help performance.
 See man zpool under Intent Log and Cache Devices.

This is a very interesting point.

In terms of SSDs for cache, I was planning on using a pair of Samsung Pro 512GB 
SSDs for this purpose (which I haven't bought yet).

But I tire of buying stuff, so I have a pair of 40GB Intel SSDs for use as sys 
disks and several Intel 160GB SSDs lying around that I can combine with the 
existing 256GB SSDs for a cache.

Then use my 36x3TB for the beasty NAS.

- aurf




Re: to gmirror or to ZFS

2013-07-16 Thread Johan Hendriks
On Tuesday 16 July 2013, Charles Swiger (cswi...@mac.com) wrote the
following:

 Hi--

 On Jul 16, 2013, at 10:33 AM, Johan Hendriks joh.hendr...@gmail.com wrote:
 [ ... ]
  I would use ZFS for the OS.
  I have a couple of servers that did not survive a power failure with
  gmirror.
  The problem I had was that when the power failed, one disk was in a
  rebuilding state, and then when the background fsck started or was busy
  for some time it would crash the whole server.

 Well, don't do that.  :-)


When the server reboots because of a power failure at night, then it boots.
Then it starts to rebuild the mirror on its own, and later the fsck kicks
in.

Not much I can do about it.

Maybe I should have done it without the automatic attachment of a new
device.





 Seriously, bring up the box on one disk, force a foreground fsck if needed
 to get the filesystem to a known clean state, and then rebuild the mirror.
 Mixing the mirror rebuild with something like an fsck will just thrash the
 disks.

 [ ... ]
  Before people tell me to use a UPS: I used a UPS, but the damn thing gave
  way itself.  Then after it came back from the warranty repair it gave way
  again.

 Grr.  That's when you want to find another UPS vendor.


Is APC not the right choice?
I think I got a Monday-morning model.
Sometimes things fail!




Regards,
 --
 -Chuck




Re: to gmirror or to ZFS

2013-07-16 Thread Warren Block

On Tue, 16 Jul 2013, aurfalien wrote:

On Jul 16, 2013, at 2:41 AM, Shane Ambler wrote:


I doubt that you would save any RAM by having the OS on a non-ZFS drive;
as you will already be using ZFS, chances are that non-ZFS drives would only
increase RAM usage by adding a second cache. ZFS uses its own cache
system and isn't going to share its cache with other system-managed
drives. I'm not actually certain whether the system cache still sits above
the ZFS cache or not; I think I read it bypasses the traditional drive cache.

For the ZFS cache you can set the maximum usage by adjusting vfs.zfs.arc_max;
that is a system-wide setting and isn't going to increase if you have
two zpools.

Tip: set the arc_max value - by default ZFS will use all physical RAM
for cache, so set it to be sure you have enough RAM left for any services
you want running.

Have you considered using one or both SSD drives with ZFS? They can be
added as cache or log devices to help performance.
See man zpool under Intent Log and Cache Devices.


This is a very interesting point.

In terms of SSDs for cache, I was planning on using a pair of Samsung Pro 512GB 
SSDs for this purpose (which I haven't bought yet).

But I tire of buying stuff, so I have a pair of 40GB Intel SSDs for use as sys 
disks and several Intel 160GB SSDs lying around that I can combine with the 
existing 256GB SSDs for a cache.

Then use my 36x3TB for the beasty NAS.


Agreed that 256G mirrored SSDs are kind of wasted as system drives.  The 
40G mirror sounds ideal.



Re: to gmirror or to ZFS

2013-07-16 Thread Charles Swiger
Hi--

On Jul 16, 2013, at 10:33 AM, Johan Hendriks joh.hendr...@gmail.com wrote:
[ ... ]
 I would use ZFS for the OS.
 I have a couple of servers that did not survive a power failure with
 gmirror.
 The problem I had was that when the power failed, one disk was in a
 rebuilding state, and then when the background fsck started or was busy
 for some time it would crash the whole server.

Well, don't do that.  :-)

Seriously, bring up the box on one disk, force a foreground fsck if needed
to get the filesystem to a known clean state, and then rebuild the mirror.
Mixing the mirror rebuild with something like an fsck will just thrash the 
disks.
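
A sketch of that sequence with gmirror (device and mirror names illustrative):

    # boot from the surviving disk, get UFS to a clean state first
    fsck -f -y /dev/mirror/gm0a
    # drop the dead component, then re-add the replacement so the
    # rebuild starts only once the filesystem is known good
    gmirror forget gm0
    gmirror insert gm0 /dev/ada1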

[ ... ]
 Before people tell me to use a UPS: I used a UPS, but the damn thing gave
 way itself.  Then after it came back from the warranty repair it gave way
 again.

Grr.  That's when you want to find another UPS vendor.

Regards,
-- 
-Chuck



Re: to gmirror or to ZFS

2013-07-16 Thread Charles Swiger
Hi--

On Jul 16, 2013, at 11:27 AM, Johan Hendriks joh.hendr...@gmail.com wrote:
 Well, don't do that.  :-)
 
 When the server reboots because of a power failure at night, then it boots.
 Then it starts to rebuild the mirror on its own, and later the fsck kicks in.
 
 Not much I can do about it.
 
 Maybe I should have done it without the automatic attachment of a new device.

It's normally the case that getting a hot spare automatically attached should be
fine, but not if you also have the box go down entirely and need to fsck.

I'm more used to needing to explicitly physically swap out a failed mirror
component, in which case one can make sure the system is OK before the
replacement drive goes in.

 [ ... ]
 Before people tell me to use a UPS: I used a UPS, but the damn thing gave
 way itself.  Then after it came back from the warranty repair it gave way
 again.
 
 Grr.  That's when you want to find another UPS vendor.
 
 Is APC not the right choice?
 I think I got a Monday-morning model.
 Sometimes things fail!

APC is decent for desktops, but I'm dubious about them when it comes to
entire racks or a DC.  I like Leviton's PDUs/MDUs and TVSS; for a
medium-sized UPS (10-40 kVA) Liebert and PowerWare (now Eaton) were good.
Liebert's PDUs are also pretty good.

Regards,
-- 
-Chuck

PS: I ran a small DC in NYC with a 20kVA PowerWare 9330 behind a Leviton
57000 TVSS; the Cupertino locals have ~650kVA worth of Bloom boxes and a
Cummins diesel genset as a backup just for this building.



Re: to gmirror or to ZFS

2013-07-16 Thread Nikos Vassiliadis

On 07/16/13 21:27, Johan Hendriks wrote:

On Tuesday 16 July 2013, Charles Swiger (cswi...@mac.com) wrote the
following:


Hi--

On Jul 16, 2013, at 10:33 AM, Johan Hendriks joh.hendr...@gmail.com wrote:
[ ... ]

I would use ZFS for the OS.
I have a couple of servers that did not survive a power failure with
gmirror.
The problem I had was that when the power failed, one disk was in a
rebuilding state, and then when the background fsck started or was busy
for some time it would crash the whole server.


Well, don't do that.  :-)



When the server reboots because of a power failure at night, then it boots.
Then it starts to rebuild the mirror on its own, and later the fsck kicks
in.

Not much I can do about it.


You could add geom_journal which will minimize the time of fsck to a 
second or something like that. Then you don't have to use background 
fsck anymore.


Actually geom_journal's manual page mentions an interesting
side-effect of geom_journal over a geom_mirror:

you can turn off component synchronization.

Geom_journal will re-play last writes so whatever was
changed just before the crash will be re-written to both disks.
I haven't used this but it makes sense in theory.
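
For reference, the basic setup along the lines of the gjournal(8) examples
(device name illustrative):

    kldload geom_journal
    gjournal label ada0s1d                # creates /dev/ada0s1d.journal
    newfs -J /dev/ada0s1d.journal         # UFS with journaling enabled
    mount -o async /dev/ada0s1d.journal /mnt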


Maybe I should have done it without the automatic attachment of a new
device.


I always turn off automatic synchronization of stale components
as well.
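
With gmirror that is a per-mirror flag (mirror name illustrative):

    # turn off autosynchronization of stale components
    gmirror configure -n gm0
    # gmirror configure -a gm0 turns it back on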

It seems to me that people don't really use geom_journal,
or maybe they just don't talk about it like it's some
sort of secret :)

just my two cents,

Nikos



to gmirror or to ZFS

2013-07-15 Thread aurfalien
... that's the question :)

At any rate, I'm building a rather large 100+TB NAS using ZFS.

However, for my OS, should I also use ZFS, or simply gmirror? I've a dedicated 
pair of 256GB SSD drives for it.  I didn't ask for SSD sys drives; this system 
just came with 'em.

This is more of a best practices q.

Thanks in advance,

- aurf


Re: to gmirror or to ZFS

2013-07-15 Thread Warren Block

On Mon, 15 Jul 2013, aurfalien wrote:


... that's the question :)

At any rate, I'm building a rather large 100+TB NAS using ZFS.

However, for my OS, should I also use ZFS, or simply gmirror? I've a dedicated 
pair of 256GB SSD drives for it.  I didn't ask for SSD sys drives; this system 
just came with 'em.

This is more of a best practices q.


ZFS has data integrity checking, gmirror has low RAM overhead.  gmirror 
is, at present, restricted to MBR partitioning due to metadata conflicts 
with GPT, so 2TB is the maximum size.


Best practices... depends on your use.  gmirror for the system leaves 
more RAM for ZFS.
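
For completeness, a minimal sketch of the gmirror route (disk names
illustrative):

    gmirror load
    gmirror label -v gm0 /dev/ada0 /dev/ada1
    echo 'geom_mirror_load="YES"' >> /boot/loader.conf
    # then partition the new /dev/mirror/gm0 with MBR, per the note above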



Re: to gmirror or to ZFS

2013-07-15 Thread aurfalien

On Jul 15, 2013, at 9:23 PM, Warren Block wrote:

 On Mon, 15 Jul 2013, aurfalien wrote:
 
 ... that's the question :)
 
 At any rate, I'm building a rather large 100+TB NAS using ZFS.
 
 However, for my OS, should I also use ZFS, or simply gmirror? I've a dedicated 
 pair of 256GB SSD drives for it.  I didn't ask for SSD sys drives; this 
 system just came with 'em.
 
 This is more of a best practices q.
 
 ZFS has data integrity checking, gmirror has low RAM overhead.  gmirror is, 
 at present, restricted to MBR partitioning due to metadata conflicts with 
 GPT, so 2TB is the maximum size.
 
 Best practices... depends on your use.  gmirror for the system leaves more 
 RAM for ZFS.

Perfect, thanks Warren.

Just what I was looking for.

- aurf
