Re: ZFS install on a partition

2013-05-24 Thread Steve O'Hara-Smith
On Thu, 23 May 2013 11:00:21 +0200
Albert Shih albert.s...@obspm.fr wrote:

 Before installing my server under 9.0 + ZFS I ran some benchmarks with
 ionice to compare 
 
 FreeBSD 9.0 + ZFS + 12 SATA disks (7200 rpm) vs CentOS + H700 + 12 SAS
 disks (15k rpm)
 
 (Both are the same Dell PowerEdge.)
 
 And ZFS + 12 SATA disks is much faster than CentOS + H700 + ext4 almost
 everywhere. Only for small files AND small record sizes is ZFS slower
 than CentOS. 

Hmm, I wonder whether that's mostly down to the SAS drives seeking faster
or to the difference between ZFS and ext4. The only real way to tell would be
to give both boxes the same kind of drives.

-- 
Steve O'Hara-Smith st...@sohara.org
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: ZFS install on a partition

2013-05-23 Thread Albert Shih
 On 17/05/2013 at 20:03:30 -0400, Paul Kraus wrote:
 
 ZFS is stable, it is NOT as tuned as UFS, just due to age. UFS in all of its 
 various incarnations has been tuned far more than any filesystem has any 
 right to be. I spent many years managing Solaris systems and I was truly 
 amazed at how tuned the Solaris version of UFS was.
 
 I have been running a number of 9.0 and 9.1 servers in production, all 
 running ZFS for both OS and data, with no FS related issues.

Have you ever tried to upgrade a ZFS pool from 9.0 to 9.1? 


I have a server with a big zpool on 9.0 and I wonder if it's a good idea to
upgrade to 9.1. If I lose the data I'm close to a dead person. The reason I'm
thinking of upgrading to 9.1 is that I have small issues with NFSD and LACP.

Regards.

JAS

-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
France
Téléphone : +33 1 45 07 76 26/+33 6 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
jeu 23 mai 2013 10:51:49 CEST
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: ZFS install on a partition

2013-05-23 Thread Albert Shih
 On 18/05/2013 at 09:02:15 -0400, Paul Kraus wrote:
 On May 18, 2013, at 3:21 AM, Ivailo Tanusheff
 ivailo.tanush...@skrill.com wrote:
 
  If you use HBA/JBOD then you will rely on the software RAID of the
  ZFS system. Yes, this RAID is good, but unless you use SSD disks to
  boost performance and a lot of RAM, the hardware raid should be more
  reliable and much faster.
 
   Why would the hardware raid be more reliable? Hardware raid is
   susceptible to uncorrectable errors from the physical drives
   (hardware raid controllers rely on the drives to report bad reads and
   writes), and the uncorrectable error rate for modern drives is such
   that with high capacity drives (1TB and over) you are almost certain
   to run into a couple over the operational life of the drive: one
   error per 10^14 bits read for cheap drives and per 10^15 bits for
   better drives; very occasionally I see a drive rated for 10^16. Run
   the math and see how many TB worth of data you have to write and read
   (remember these failures are generally read failures with NO
   indication that a failure occurred; bad data is just returned to the
   system).
 
   In terms of performance HW raid is faster, generally due to the cache
   RAM built into the HW raid controller. ZFS makes good use of system,

Before installing my server under 9.0 + ZFS I ran some benchmarks with
ionice to compare 

FreeBSD 9.0 + ZFS + 12 SATA disks (7200 rpm) vs CentOS + H700 + 12 SAS disks
(15k rpm)

(Both are the same Dell PowerEdge.)

And ZFS + 12 SATA disks is much faster than CentOS + H700 + ext4 almost 
everywhere. Only for small files AND small record sizes is ZFS slower than 
CentOS. 

The server doesn't have an SSD. It has 48 GB of RAM. 

Regards.

JAS
-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
France
Téléphone : +33 1 45 07 76 26/+33 6 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
jeu 23 mai 2013 10:53:50 CEST
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: ZFS install on a partition

2013-05-23 Thread Paul Kraus
On May 23, 2013, at 4:53 AM, Albert Shih albert.s...@obspm.fr wrote:

 Have you ever tried to upgrade a ZFS pool from 9.0 to 9.1? 

I recently upgraded my home server from 9.0 to 9.1. Actually, I exported my 
data zpool (raidz2), did a clean installation of 9.1, then imported my data 
zpool. Everything went perfectly. zpool upgrade did NOT indicate that there was 
a newer version of zpool, so I did not even have to upgrade the on-disk zpool 
format (currently 28).
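
For reference, checking the on-disk format before and after the OS upgrade is
just a couple of commands (the pool name "data" here is only an example):

    zpool upgrade            # lists pools still on an older on-disk version
    zpool get version data   # shows the current on-disk version of pool "data"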

 I have a server with a big zpool on 9.0 and I wonder if it's a good idea to
 upgrade to 9.1. If I lose the data I'm close to a dead person. The reason I'm
 thinking of upgrading to 9.1 is that I have small issues with NFSD and LACP.

My data zpool is not that big, only five 1TB drives in a raidZ2 for a net 
capacity of about 3TB, plus one 1TB hot spare.

My suggestion is to do the following (which is how I did the upgrade):

1) on a different physical system install 9.1, get the OS configured how you 
want it
2) on the production server, export the data zpool
3) shutdown the production server
4) remove the OS drives from the production server and replace with the drives 
you just installed 9.1 on
5) boot the production server with the 9.1 OS drives, make sure everything is 
working the way you want
6) import the data zpool

If the import fails, you can always put the 9.0 drives back in and get back up 
and running fairly quickly.
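
For reference, a minimal sketch of the zpool side of that procedure (the pool
name "data" is only an example; substitute your own):

    # step 2, on the old 9.0 installation
    zpool export data

    # step 6, on the new 9.1 installation
    zpool import           # with no argument, lists pools available for import
    zpool import data
    zpool status data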

My system has the OS on a mirror zpool of two drives for just the OS.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: ZFS install on a partition

2013-05-19 Thread Paul Kraus
On May 18, 2013, at 10:16 PM, kpn...@pobox.com wrote:

 On Sat, May 18, 2013 at 01:29:58PM +, Ivailo Tanusheff wrote:

 Not sure about your calculations, hope you trust them, but at my previous 
 company we had a 3-4 month period when a disk failed almost every day on 2 
 year old servers, so trust me - I do NOT trust those calculations, as I've 
 seen the opposite. Maybe it was a failed batch of disks shipped into the 
 country, but no one is insured against that. Yes, you can use several hot 
 spares on the software raid, but:
 
 What calculations are you talking about? He posted the uncorrectable read
 error probabilities manufacturers put into drive datasheets. The probability
 of a URE is distinct from and very different from the probability of the
 entire drive failing.

I think he is referring to the calculation I did based on uncorrectable 
error rate and whether you will run into that type of error over the life of 
the drive.

1 TB (taken here as 2^40 bytes) = 8,796,093,022,208 bits

10^15 bits / 8,796,093,022,208 bits ~= 113.687

So if over the life of the drive you READ a TOTAL of about 113.7 TB, then 
you will, statistically speaking, run into one uncorrectable read error and 
potentially return bad data to the application or OS. This does NOT scale with 
the size of the drive; it is the same for any drive with an uncorrectable error 
rate of one error per 10^15 bits read. So whether you read the entirety of a 
1 TB drive about 114 times or a 4 TB drive about 29 times, you get the same result.
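
A quick sanity check of that arithmetic from the shell (assuming a rating of
one unrecoverable error per 10^15 bits and treating 1 TB as 2^40 bytes):

    echo 'scale=3; 10^15 / (2^40 * 8)' | bc -l
    # prints 113.686, i.e. roughly 114 full reads of a 1 TB drive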

But this is a statistical probability, and some drives will have more 
(much more) uncorrectable errors and others will have fewer (far fewer), 
although I don't know whether the distribution falls on a typical Gaussian 
(bell) curve.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


RE: ZFS install on a partition

2013-05-18 Thread Ivailo Tanusheff
Hi,

The overhead depends on the amount of changes made since the oldest 
snapshot, relative to the current data on the ZFS pool.
Snapshots keep only the differences between the live system and each other, 
so if you have made 10GB of changes over the last 7 days and your oldest snapshot 
is 7 days old - then the overhead will be a little more than 10GB (because of 
the system info) :)
So this is a very efficient way to run things.
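
A quick way to see how much space snapshots are actually holding (the dataset
and snapshot names here are only examples):

    zfs snapshot tank/data@2013-05-18
    zfs list -t snapshot -o name,used,refer -r tank/data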

Just keep in mind that having a lot of snapshots can decrease performance when 
you create/delete a snapshot, as the system has to calculate the changes.

Best regards,
Ivailo Tanusheff

-Original Message-
From: owner-freebsd-questi...@freebsd.org 
[mailto:owner-freebsd-questi...@freebsd.org] On Behalf Of b...@todoo.biz
Sent: Saturday, May 18, 2013 8:33 AM
To: Liste FreeBSD
Subject: Re: ZFS install on a partition


On 18 May 2013 at 06:49, kpn...@pobox.com wrote:

 On Fri, May 17, 2013 at 08:03:30PM -0400, Paul Kraus wrote:
 On May 17, 2013, at 6:24 PM, b...@todoo.biz b...@todoo.biz wrote:
 3. Should I avoid using ZFS since my system is not well tuned and it would 
 be asking for trouble to use ZFS in these conditions? 
 
 No. One of the biggest benefits of ZFS is the end to end data integrity.
 IF there is a silent fault in the HW RAID (it happens), ZFS will 
 detect the corrupt data and note it. If you had a mirror or other 
 redundant device, ZFS would then read the data from the *other* copy 
 and rewrite the bad block (or mark that physical block bad and use another).
 
 I believe the copies=2 and copies=3 option exists to enable ZFS to 
 self heal despite ZFS not being in charge of RAID. If ZFS only has a 
 single LUN to work with, but the copies=2 or more option is set, then 
 if ZFS detects an error it can still correct it.
 
 This option is a dataset option, is inheritable by child datasets, and 
 can be changed at any time affecting data written after the change. To 
 get the full benefit you'll therefore want to set the option before 
 putting data into the relevant dataset.

Ok, good to know.
I plan to set up a consistent snapshot policy and remote backup using zfs 
send / receive. That should be enough for me. 

Is the overhead of this setup equal to double the size used on disk? 


 
 -- 
 Kevin P. Nealhttp://www.pobox.com/~kpn/
 
 Nonbelievers found it difficult to defend their position in \ 
the presence of a working computer. -- a DEC Jensen paper


BSD - BSD - BSD - BSD - BSD - BSD - BSD - BSD - 

PGP ID -- 0x1BA3C2FD

___
freebsd-questions@freebsd.org mailing list 
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


RE: ZFS install on a partition

2013-05-18 Thread Ivailo Tanusheff
Hi,

If you use HBA/JBOD then you will rely on the software RAID of the ZFS system. 
Yes, this RAID is good, but unless you use SSD disks to boost performance and a 
lot of RAM, the hardware raid should be more reliable and much faster.
I didn't get whether you want to use the system to dual boot Linux/FreeBSD or just 
to share FreeBSD space with Linux.
But I would advise you to go with option 1 - you will get the most out of the system 
and obviously you don't need a zpool with raid, as your LSI controller will do 
all the redundancy for you. Making software RAID on top of the hardware one will 
only decrease performance and will NOT increase the reliability, as you will 
not be sure which information is stored on which physical disk.

If stability is a MUST, then I will also advise you to go with a bunch of pools 
and a disk designated as hot spare - in case some disk dies you will rely on 
the automatic recovery. Also you should run a monitoring tool on your raid 
controller.
You can also set copies=2/3 just in case some errors occur, so ZFS can 
auto-repair the data. If you run ZFS over several LUNs this will make even more 
sense (see the sketch below). 
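
A rough sketch of that kind of layout, with made-up pool and device names
(several LUNs, a designated hot spare, and extra copies for self-healing):

    zpool create tank da1 da2 da3 spare da4
    zfs set copies=2 tank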

Best regards,
Ivailo Tanusheff

-Original Message-
From: owner-freebsd-questi...@freebsd.org 
[mailto:owner-freebsd-questi...@freebsd.org] On Behalf Of b...@todoo.biz
Sent: Saturday, May 18, 2013 1:24 AM
To: Liste FreeBSD
Subject: ZFS install on a partition

Hi, 

I have a question regarding ZFS install on a system setup using an Intel 
Modular. 

This system runs various flavors of FreeBSD and Linux using a shared pool 
(LUNs). 
These LUNs have been configured in RAID 6 using the internal controller (LSI 
logic). 

So from the OS point of view there is just a volume available. 


I know I should install a system using HBA and JBOD configuration - but 
unfortunately this is not an option for this server. 

What would you advise ? 

1. Can I use an existing partition and set up ZFS on this partition using a 
standard zpool (no RAID)? 

2. Should I use any other solution to set this up (like a full ZFS 
install on disk using the entire pool with ZFS)? 

3. Should I avoid using ZFS since my system is not well tuned and it would be 
asking for trouble to use ZFS in these conditions? 


P.S. Stability is a must for this system - so I won't die if you answer 3 and 
tell me to keep on using UFS. 


Thanks. 


___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


RE: ZFS install on a partition

2013-05-18 Thread Ivailo Tanusheff
Hi,

If you go with a RAID 6 setup on your RAID controller I think you will not need 
a spare so much, as you will already have data redundancy distributed over 2 disks.
I think you can use 2 or 3 LUNs, just to have more flexibility in the solution, 
but it is not a must :)

To keep two copies of the data on a pool named mypool, issue:
zfs set copies=2 mypool
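
And to double-check the setting afterwards (same example pool name):

zfs get copies mypool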

Best regards,
Ivailo Tanusheff

-Original Message-
From: b...@todoo.biz [mailto:b...@todoo.biz] 
Sent: Saturday, May 18, 2013 10:46 AM
To: Ivailo Tanusheff
Subject: Re: ZFS install on a partition


On 18 May 2013 at 09:21, Ivailo Tanusheff ivailo.tanush...@skrill.com wrote:

 Hi,
 
 If you use HBA/JBOD then you will rely on the software RAID of the ZFS system.

This is the config of my backup system - not the one I am planning to update. 

 Yes, this RAID is good, but unless you use SSD disks to boost performance and 
 a lot of RAM, the hardware raid should be more reliable and much faster.

Ok 

 I didn't get if you want to use the system to dual boot Linux/FreeBSD or just 
 to share FreeBSD space with linux.

Neither one! 
I want to set up a FreeBSD-only system. 
It will be used to deploy jails. 

 But I would advise you to go with option 1 - you will get the most out of the system 
 and obviously you don't need a zpool with raid, as your LSI controller will do 
 all the redundancy for you. Making software RAID on top of the hardware one will 
 only decrease performance and will NOT increase the reliability, as you will 
 not be sure which information is stored on which physical disk.

Ok

 
 If stability is a MUST, then I will also advise you to go with a bunch of pools 
 and a disk designated as hot spare - in case some disk dies you will rely on 
 the automatic recovery. Also you should run a monitoring tool on your raid 
 controller.

I can't do that because of the design of the machine I will use. 
I only have LUNs available, configured as volumes on top of a RAID 6 pool of 
disks. 

This is presented as a block device to the system. 

 You can also set copies=2/3 just in case some errors occur, so ZFS can 
 auto-repair the data. If you run ZFS over several LUNs this will make even 
 more sense. 

OK, I'll try to figure out how to do that during the install so that it is in 
place as early as possible in the life of the system. 
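
Roughly, the idea would be to set it at pool creation time, so even the first
data written gets the extra copies; a sketch with made-up names:

    zpool create -O copies=2 tank /dev/da0p3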


Thx. 

 
 Best regards,
 Ivailo Tanusheff
 
 -Original Message-
 From: owner-freebsd-questi...@freebsd.org 
 [mailto:owner-freebsd-questi...@freebsd.org] On Behalf Of 
 b...@todoo.biz
 Sent: Saturday, May 18, 2013 1:24 AM
 To: Liste FreeBSD
 Subject: ZFS install on a partition
 
 Hi,
 
 I have a question regarding ZFS install on a system setup using an Intel 
 Modular. 
 
 This system runs various flavors of FreeBSD and Linux using a shared pool 
 (LUNs). 
 These LUNs have been configured in RAID 6 using the internal controller (LSI 
 logic). 
 
 So from the OS point of view there is just a volume available. 
 
 
 I know I should install a system using HBA and JBOD configuration - but 
 unfortunately this is not an option for this server. 
 
 What would you advise ? 
 
 1. Can I use an existing partition and set up ZFS on this partition using a 
 standard zpool (no RAID)? 
 
 2. Should I use any other solution to set this up (like a full ZFS 
 install on disk using the entire pool with ZFS)? 
 
 3. Should I avoid using ZFS since my system is not well tuned and it would be 
 asking for trouble to use ZFS in these conditions? 
 
 
 P.S. Stability is a must for this system - so I won't die if you answer 3 
 and tell me to keep on using UFS. 
 
 
 Thanks. 
 
 


BSD - BSD - BSD - BSD - BSD - BSD - BSD - BSD - 

PGP ID -- 0x1BA3C2FD



___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: ZFS install on a partition

2013-05-18 Thread Paul Kraus
On May 18, 2013, at 3:21 AM, Ivailo Tanusheff ivailo.tanush...@skrill.com 
wrote:

 If you use HBA/JBOD then you will rely on the software RAID of the ZFS 
 system. Yes, this RAID is good, but unless you use SSD disks to boost 
 performance and a lot of RAM, the hardware raid should be more reliable and 
 much faster.

Why would the hardware raid be more reliable? Hardware raid is 
susceptible to uncorrectable errors from the physical drives (hardware raid 
controllers rely on the drives to report bad reads and writes), and the 
uncorrectable error rate for modern drives is such that with high capacity 
drives (1TB and over) you are almost certain to run into a couple over the 
operational life of the drive: one error per 10^14 bits read for cheap drives 
and per 10^15 bits for better drives; very occasionally I see a drive rated for 
10^16. Run the math and see how many TB worth of data you have to write and 
read (remember these failures are generally read failures with NO indication 
that a failure occurred; bad data is just returned to the system).

In terms of performance HW raid is faster, generally due to the cache 
RAM built into the HW raid controller. ZFS makes good use of system RAM for 
the same function. An SSD can help with performance if the majority of writes 
are sync (NFS is a good example of this) or if you can benefit from a much 
larger read cache. SSDs are deployed with ZFS either as write LOG devices (in 
which case they should be mirrored), which only come into play for SYNC 
writes, or as an extension of the ARC, the L2ARC, which does not have to be 
mirrored as it is only a cache of existing data for speeding up reads.
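
For reference, adding those devices to an existing pool looks roughly like this
(pool and device names are made up):

    zpool add tank log mirror ada4 ada5   # mirrored SLOG for sync writes
    zpool add tank cache ada6             # L2ARC device, no mirroring needed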

 I didn't get whether you want to use the system to dual boot Linux/FreeBSD or just 
 to share FreeBSD space with Linux.
 But I would advise you to go with option 1 - you will get the most out of the system 
 and obviously you don't need a zpool with raid, as your LSI controller will do 
 all the redundancy for you. Making software RAID on top of the hardware one will 
 only decrease performance and will NOT increase the reliability, as you will 
 not be sure which information is stored on which physical disk.
 
 If stability is a MUST, then I will also advise you to go with a bunch of pools 
 and a disk designated as hot spare - in case some disk dies you will rely on 
 the automatic recovery. Also you should run a monitoring tool on your raid 
 controller.

I think you misunderstand the difference between stability and 
reliability. Any ZFS configuration I have tried on FreeBSD is STABLE; having 
redundant vdevs (mirrors or RAIDzn) along with hot spares can increase 
RELIABILITY. The only advantage to having a hot spare is that when a drive 
fails (and they all fail eventually), the REPLACE operation can start 
immediately, without waiting for you to notice and manually replace the failed drive.

Reliability is a combination of reduction in MTBF (mean time between 
failures) and MTTR (mean time to repair). Having a hot spare reduces the MTTR. 
The other way to improve MTTR is to go with smaller drives to reduce the time 
it takes the system to resilver a failed drive. This is NOT applicable in the 
OP's situation. I try very hard not to use drives larger than 1TB because 
resilver times can be days. Resilver time also depends on the total amount of 
data in a zpool, as a resilver operation walks the FS in time order, replaying 
all the writes and confirming that all the data on disk is good (it does not 
actually rewrite the data unless it finds bad data). This means a couple of 
things, the first of which is that the resilver time will be dependent on the 
amount of data you have written, not the capacity. A zpool with a capacity of 
multiple TB will resilver in seconds if there is only a few hundred MB written 
to it. Since the resilver operation is not just a block by block copy, but a 
replay, it is I/Ops limited, not bandwidth limited. You might be able to 
stream sequential data from a drive at hundreds of MB/sec., but most SATA 
drives will not sustain more than one to two hundred RANDOM I/Ops (sequentially 
they can do much more).
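
For reference, a manual replacement that triggers a resilver is roughly (names
are made up):

    zpool replace tank da3 da7   # resilver the failed da3 onto the new da7
    zpool status tank            # shows resilver progress and estimated time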

 You can also set copies=2/3 just in case some errors occur, so ZFS can 
 auto-repair the data. If you run ZFS over several LUNs this will make even 
 more sense. 

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: ZFS install on a partition

2013-05-18 Thread Paul Kraus
On May 18, 2013, at 12:49 AM, kpn...@pobox.com wrote:

 On Fri, May 17, 2013 at 08:03:30PM -0400, Paul Kraus wrote:
 On May 17, 2013, at 6:24 PM, b...@todoo.biz b...@todoo.biz wrote:
 3. Should I avoid using ZFS since my system is not well tuned and it would 
 be asking for trouble to use ZFS in these conditions? 
 
 No. One of the biggest benefits of ZFS is the end to end data integrity.
 IF there is a silent fault in the HW RAID (it happens), ZFS will detect
 the corrupt data and note it. If you had a mirror or other redundant device,
 ZFS would then read the data from the *other* copy and rewrite the bad
 block (or mark that physical block bad and use another).
 
 I believe the copies=2 and copies=3 option exists to enable ZFS to
 self heal despite ZFS not being in charge of RAID. If ZFS only has a single
 LUN to work with, but the copies=2 or more option is set, then if ZFS
 detects an error it can still correct it.

Yes, but… What the copies=n parameter does is tell ZFS to make 
that many copies of every block written to the top level device. So if you set 
copies=2 and then write a 2MB file, it will take up 4MB of space since ZFS will 
keep two copies of it. ZFS will attempt to put them on different devices if it 
can, but there are no guarantees here. If you have a single vdev stripe and you 
lose that one device, you *will* lose all your data (assuming you did not have 
another backup copy someplace else). On the other hand, if the single device 
develops some bad blocks, with copies=2 you will *probably* not lose data as 
there will be other copies of those disk blocks elsewhere to recover from.
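
A quick way to see that space effect on a test dataset (names are made up):

    zfs create -o copies=2 tank/important
    cp somefile /tank/important/
    ls -l /tank/important/somefile   # logical file size
    du -h /tank/important/somefile   # roughly double, since every block is stored twice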

From my experience on the ZFS discuss lists, the place people seem to 
use copies=2 or more is on laptops, where they only have one drive and 
copies greater than 1 is better than no protection at all; it is just not 
complete protection.

 This option is a dataset option, is inheritable by child datasets, and can
 be changed at any time affecting data written after the change. To get the
 full benefit you'll therefore want to set the option before putting data
 into the relevant dataset.

You can change it at any time and it will only affect data written from 
that point on. This can be useful if you have both high value and low value 
data and you can control when each is written. For example, you leave copies=1 
most of the time; then, when you want to save your wedding photos, you set 
copies=3, write all the wedding photos, and then set copies back to 1. You will 
have three copies of the wedding photos and one copy of everything else.
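
That workflow, roughly, in commands (the dataset name is made up):

    zfs set copies=3 tank/photos
    cp -R /path/to/wedding /tank/photos/
    zfs set copies=1 tank/photos   # later writes go back to a single copy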

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


RE: ZFS install on a partition

2013-05-18 Thread Ivailo Tanusheff
The software RAID depends not only on the disks, but also on changes to the 
OS, which occur more frequently than updates to the firmware of the raid 
controller. So that makes the hardware raid more stable and reliable.
Also, the resources of the hardware raid are used exclusively by the raid 
controller, which is not true for a software raid.
So I do not get your point that a software raid is the same as or better 
than the hardware one.

About the second part - I am pointing at both stability and reliability. Having a 
spare disk reduces the risk, as the recovery operation will start as soon as a 
disk fails. It may sound paranoid, but the possibility of a failing disk 
being detected only after 8, 12 or even 24 hours is still pretty big.
Not sure about your calculations, hope you trust them, but at my previous 
company we had a 3-4 month period when a disk failed almost every day on 2 
year old servers, so trust me - I do NOT trust those calculations, as I've seen 
the opposite. Maybe it was a failed batch of disks shipped into the country, but 
no one is insured against that. Yes, you can use several hot spares with 
software raid, but:
1. You still depend on problems related to the OS.
2. If you read what the person asking has written - you will see that it is not 
possible for him.

I agree with what was said about recovering big chunks of data; that's why I 
suggested that he use several smaller LUNs for the zpool.

Best regards,
Ivailo Tanusheff

-Original Message-
From: owner-freebsd-questi...@freebsd.org 
[mailto:owner-freebsd-questi...@freebsd.org] On Behalf Of Paul Kraus
Sent: Saturday, May 18, 2013 4:02 PM
To: Ivailo Tanusheff
Cc: Liste FreeBSD
Subject: Re: ZFS install on a partition

On May 18, 2013, at 3:21 AM, Ivailo Tanusheff ivailo.tanush...@skrill.com 
wrote:

 If you use HBA/JBOD then you will rely on the software RAID of the ZFS 
 system. Yes, this RAID is good, but unless you use SSD disks to boost 
 performance and a lot of RAM, the hardware raid should be more reliable and 
 much faster.

Why would the hardware raid be more reliable? Hardware raid is 
susceptible to uncorrectable errors from the physical drives (hardware raid 
controllers rely on the drives to report bad reads and writes), and the 
uncorrectable error rate for modern drives is such that with high capacity 
drives (1TB and over) you are almost certain to run into a couple over the 
operational life of the drive: one error per 10^14 bits read for cheap drives 
and per 10^15 bits for better drives; very occasionally I see a drive rated for 
10^16. Run the math and see how many TB worth of data you have to write and 
read (remember these failures are generally read failures with NO indication 
that a failure occurred; bad data is just returned to the system).

In terms of performance HW raid is faster, generally due to the cache 
RAM built into the HW raid controller. ZFS makes good use of system RAM for 
the same function. An SSD can help with performance if the majority of writes 
are sync (NFS is a good example of this) or if you can benefit from a much 
larger read cache. SSDs are deployed with ZFS either as write LOG devices (in 
which case they should be mirrored), which only come into play for SYNC 
writes, or as an extension of the ARC, the L2ARC, which does not have to be 
mirrored as it is only a cache of existing data for speeding up reads.

 I didn't get whether you want to use the system to dual boot Linux/FreeBSD or just 
 to share FreeBSD space with Linux.
 But I would advise you to go with option 1 - you will get the most out of the system 
 and obviously you don't need a zpool with raid, as your LSI controller will do 
 all the redundancy for you. Making software RAID on top of the hardware one will 
 only decrease performance and will NOT increase the reliability, as you will 
 not be sure which information is stored on which physical disk.
 
 If stability is a MUST, then I will also advise you to go with a bunch of pools 
 and a disk designated as hot spare - in case some disk dies you will rely on 
 the automatic recovery. Also you should run a monitoring tool on your raid 
 controller.

I think you misunderstand the difference between stability and 
reliability. Any ZFS configuration I have tried on FreeBSD is STABLE; having 
redundant vdevs (mirrors or RAIDzn) along with hot spares can increase 
RELIABILITY. The only advantage to having a hot spare is that when a drive 
fails (and they all fail eventually), the REPLACE operation can start 
immediately, without waiting for you to notice and manually replace the failed drive.

Reliability is a combination of reduction in MTBF (mean time between 
failures) and MTTR (mean time to repair). Having a hot spare reduces the MTTR. 
The other way to improve MTTR is to go with smaller drives to reduce the time 
it takes the system to resilver a failed drive. This is NOT applicable in the 
OP's situation. I try very hard not to use drives larger than 1TB because

ZFS install on a partition

2013-05-17 Thread b...@todoo.biz
Hi, 

I have a question regarding ZFS install on a system setup using an Intel 
Modular. 

This system runs various flavors of FreeBSD and Linux using a shared pool 
(LUNs). 
These LUNs have been configured in RAID 6 using the internal controller (LSI 
logic). 

So from the OS point of view there is just a volume available. 


I know I should install a system using HBA and JBOD configuration - but 
unfortunately this is not an option for this server. 

What would you advise ? 

1. Can I use an existing partition and set up ZFS on this partition using a 
standard zpool (no RAID)? 

2. Should I use any other solution to set this up (like a full ZFS 
install on disk using the entire pool with ZFS)? 

3. Should I avoid using ZFS since my system is not well tuned and it would be 
asking for trouble to use ZFS in these conditions? 


P.S. Stability is a must for this system - so I won't die if you answer 3 and 
tell me to keep on using UFS. 


Thanks. 



BSD - BSD - BSD - BSD - BSD - BSD - BSD - BSD -

PGP ID -- 0x1BA3C2FD

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: ZFS install on a partition

2013-05-17 Thread Joshua Isom
Your hardware raid should be faster than ZFS raid.  Don't use ZFS raid 
because there will be no benefit.  You'll get the performance of 
software raid, spending CPU time, along with space lost to redundancy 
for data the controller already protects.


ZFS should work fine.  A lot of the tuning on the wiki page isn't needed 
anymore, so it's not too bad.  The biggest thing to be careful with is 
upgrading your zpool: every so often your boot blocks may need updating, 
and if you forget, you can't boot.  You won't upgrade your pool often, of 
course.  Reliability shouldn't be an issue, it's FreeBSD.  ZFS will make 
it easier to play around with jails; have fun and create a 1000-node 
Beowulf cluster on one system.
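
For reference, on a ZFS-on-root system the boot blocks are typically refreshed
after a pool upgrade with something like this (the pool name, disk, and
partition index are made up; adjust to your layout):

    zpool upgrade zroot
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0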


On 5/17/2013 5:24 PM, b...@todoo.biz wrote:

Hi,

I have a question regarding ZFS install on a system setup using an Intel 
Modular.

This system runs various flavors of FreeBSD and Linux using a shared pool (LUNs).
These LUNs have been configured in RAID 6 using the internal controller (LSI 
logic).

So from the OS point of view there is just a volume available.


I know I should install a system using HBA and JBOD configuration - but 
unfortunately this is not an option for this server.

What would you advise ?

1. Can I use an existing partition and set up ZFS on this partition using a 
standard zpool (no RAID)?

2. Should I use any other solution to set this up (like a full ZFS 
install on disk using the entire pool with ZFS)?

3. Should I avoid using ZFS since my system is not well tuned and it would be 
asking for trouble to use ZFS in these conditions?


P.S. Stability is a must for this system - so I won't die if you answer 3 and 
tell me to keep on using UFS.


Thanks.



BSD - BSD - BSD - BSD - BSD - BSD - BSD - BSD -

PGP ID -- 0x1BA3C2FD

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org



___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: ZFS install on a partition

2013-05-17 Thread Paul Kraus
On May 17, 2013, at 6:24 PM, b...@todoo.biz b...@todoo.biz wrote:

 I know I should install a system using HBA and JBOD configuration - but 
 unfortunately this is not an option for this server. 

I ran many ZFS pools on top of hardware raid units, because that is what we 
had. It works fine, and the NVRAM write cache of the better hardware raid 
systems gives you a performance boost.

 What would you advise ? 
 
 1. Can I use an existing partition and set up ZFS on this partition using a 
 standard zpool (no RAID)? 

Sure. Be careful when you say RAID… I assume you mean RAIDzn configured top 
level vdevs. Remember, a mirror is RAID-1 and the base ZFS striping is 
considered RAID-0. So set it up as a plain stripe of one vdev :-)
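
A minimal sketch of that, with a made-up pool name and partition (any
freebsd-zfs GPT partition or LUN works):

    zpool create tank /dev/da0p3
    zpool status tank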

 2. Should I use any other solution to set this up (like a full ZFS 
 install on disk using the entire pool with ZFS)? 

If the system is configured with existing LUNs, use them.

 3. Should I avoid using ZFS since my system is not well tuned and it would be 
 asking for trouble to use ZFS in these conditions? 

No. One of the biggest benefits of ZFS is the end to end data integrity. IF 
there is a silent fault in the HW RAID (it happens), ZFS will detect the 
corrupt data and note it. If you had a mirror or other redundant device, ZFS 
would then read the data from the *other* copy and rewrite the bad block (or 
mark that physical block bad and use another).

 P.S. Stability is a must for this system - so I won't die if you answer 3 
 and tell me to keep on using UFS. 

ZFS is stable, it is NOT as tuned as UFS, just due to age. UFS in all of its 
various incarnations has been tuned far more than any filesystem has any right 
to be. I spent many years managing Solaris systems and I was truly amazed at how 
tuned the Solaris version of UFS was.

I have been running a number of 9.0 and 9.1 servers in production, all running 
ZFS for both OS and data, with no FS related issues.

 
 
 Thanks. 
 
 
 
 BSD - BSD - BSD - BSD - BSD - BSD - BSD - BSD -
 
 PGP ID -- 0x1BA3C2FD
 
 ___
 freebsd-questions@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/freebsd-questions
 To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: ZFS install on a partition

2013-05-17 Thread Damien Fleuriot

On 18 May 2013, at 01:15, Joshua Isom jri...@gmail.com wrote:

 Your hardware raid should be faster than ZFS raid.  Don't use zfs raid 
 because there will be no benefit.  


Self healing, much?

I wouldn't dream of dropping it for a 20 MB/s performance increase from a HW 
controller.

What if the controller derps and writes bad data?



 You'll get the performance of software raid, spending CPU time, along with 
 space lost to redundancy for data the controller already protects.
 
 ZFS should work fine.  A lot of the tuning on the wiki page isn't needed 
 anymore, so it's not too bad.  The biggest thing to be careful with is 
 upgrading your zpool: every so often your boot blocks may need updating, and if 
 you forget, you can't boot.  You won't upgrade your pool often, of course.  
 Reliability shouldn't be an issue, it's FreeBSD.  ZFS will make it easier to 
 play around with jails; have fun and create a 1000-node Beowulf cluster on one system.
 
 On 5/17/2013 5:24 PM, b...@todoo.biz wrote:
 Hi,
 
 I have a question regarding ZFS install on a system setup using an Intel 
 Modular.
 
 This system runs various flavors of FreeBSD and Linux using a shared pool 
 (LUNs).
 These LUNs have been configured in RAID 6 using the internal controller (LSI 
 logic).
 
 So from the OS point of view there is just a volume available.
 
 
 I know I should install a system using HBA and JBOD configuration - but 
 unfortunately this is not an option for this server.
 
 What would you advise ?
 
 1. Can I use an existing partition and set up ZFS on this partition using a 
 standard zpool (no RAID)?
 
 2. Should I use any other solution to set this up (like a full ZFS 
 install on disk using the entire pool with ZFS)?
 
 3. Should I avoid using ZFS since my system is not well tuned and it would 
 be asking for trouble to use ZFS in these conditions?
 
 
 P.S. Stability is a must for this system - so I won't die if you answer 3 
 and tell me to keep on using UFS.
 
 
 Thanks.
 
 
 
 BSD - BSD - BSD - BSD - BSD - BSD - BSD - BSD -
 
 PGP ID -- 0x1BA3C2FD
 
 ___
 freebsd-questions@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/freebsd-questions
 To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org
 
 ___
 freebsd-questions@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/freebsd-questions
 To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org

Re: ZFS install on a partition

2013-05-17 Thread b...@todoo.biz
Thanks for this detailed answer. 

A couple of comments though… 

On 18 May 2013 at 02:03, Paul Kraus p...@kraus-haus.org wrote:

 On May 17, 2013, at 6:24 PM, b...@todoo.biz b...@todoo.biz wrote:
 
 I know I should install a system using HBA and JBOD configuration - but 
 unfortunately this is not an option for this server. 
 
 I ran many ZFS pools on top of hardware raid units, because that is what we 
 had. It works fine, and the NVRAM write cache of the better hardware raid 
 systems gives you a performance boost.
 
 What would you advise ? 
 
  1. Can I use an existing partition and set up ZFS on this partition using a 
  standard zpool (no RAID)? 
 
 Sure. Be careful when you say RAID… I assume you mean RAIDzn configured top 
 level vdevs. Remember, a mirror is RAID-1 and the base ZFS striping is 
 considered RAID-0. So set it up as a plain stripe of one vdev :-)

Ok so I'll use a dedicated volume (LUN) and install it as a RAID-0 vdev. 

 
 2. Should I use any other solution to set this up (like a full ZFS 
 install on disk using the entire pool with ZFS)? 
 
 If the system is configured with existing LUNS use them.
 
 3. Should I avoid using ZFS since my system is not well tuned and it would 
 be asking for trouble to use ZFS in these conditions? 
 
 No. One of the biggest benefits of ZFS is the end to end data integrity. IF 
 there is a silent fault in the HW RAID (it happens), ZFS will detect the 
 corrupt data and note it. If you had a mirror or other redundant device, ZFS 
 would then read the data from the *other* copy and rewrite the bad block (or 
 mark that physical block bad and use another).
 
 P.S. Stability is a must for this system - so I won't die if you answer 3 
 and tell me to keep on using UFS. 
 
 ZFS is stable, it is NOT as tuned as UFS, just due to age. UFS in all of its 
 various incarnations has been tuned far more than any filesystem has any 
 right to be. I spent many years managing Solaris systems and I was truly 
 amazed at how tuned the Solaris version of UFS was.
 
 I have been running a number of 9.0 and 9.1 servers in production, all 
 running ZFS for both OS and data, with no FS related issues.

Ok - great answer. 

I have set up a FreeNAS ZFS appliance (running native HBAs + JBOD) and used it 
as a backup solution using snapshots. 
This is why I wanted ZFS in the first place. 


If you have any other advice, it is welcome. 



Thanks a lot. 

GB. 


 
 
 
 Thanks. 
 
 
 
 BSD - BSD - BSD - BSD - BSD - BSD - BSD - BSD -
 
 PGP ID -- 0x1BA3C2FD
 
 ___
 freebsd-questions@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/freebsd-questions
 To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org
 
 --
 Paul Kraus
 Deputy Technical Director, LoneStarCon 3
 Sound Coordinator, Schenectady Light Opera Company
 


BSD - BSD - BSD - BSD - BSD - BSD - BSD - BSD -

PGP ID -- 0x1BA3C2FD

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: ZFS install on a partition

2013-05-17 Thread b...@todoo.biz

On 18 May 2013 at 06:49, kpn...@pobox.com wrote:

 On Fri, May 17, 2013 at 08:03:30PM -0400, Paul Kraus wrote:
 On May 17, 2013, at 6:24 PM, b...@todoo.biz b...@todoo.biz wrote:
 3. Should I avoid using ZFS since my system is not well tuned and it would 
 be asking for trouble to use ZFS in these conditions? 
 
 No. One of the biggest benefits of ZFS is the end to end data integrity.
 IF there is a silent fault in the HW RAID (it happens), ZFS will detect
 the corrupt data and note it. If you had a mirror or other redundant device,
 ZFS would then read the data from the *other* copy and rewrite the bad
 block (or mark that physical block bad and use another).
 
 I believe the copies=2 and copies=3 option exists to enable ZFS to
 self heal despite ZFS not being in charge of RAID. If ZFS only has a single
 LUN to work with, but the copies=2 or more option is set, then if ZFS
 detects an error it can still correct it.
 
 This option is a dataset option, is inheritable by child datasets, and can
 be changed at any time affecting data written after the change. To get the
 full benefit you'll therefore want to set the option before putting data
 into the relevant dataset.

Ok, good to know.
I plan to set up a consistent snapshot policy and remote backup using zfs 
send / receive. 
That should be enough for me… 
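
A rough sketch of the send / receive side of that plan (the host, pool and
dataset names are made up):

    zfs snapshot tank/data@2013-05-18
    zfs send tank/data@2013-05-18 | ssh backuphost zfs receive backup/data

    # later, incrementally:
    zfs send -i tank/data@2013-05-18 tank/data@2013-05-19 | \
        ssh backuphost zfs receive backup/data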

Is the overhead of this setup equal to double the size used on disk? 


 
 -- 
 Kevin P. Nealhttp://www.pobox.com/~kpn/
 
 Nonbelievers found it difficult to defend their position in \ 
the presence of a working computer. -- a DEC Jensen paper


BSD - BSD - BSD - BSD - BSD - BSD - BSD - BSD -

PGP ID -- 0x1BA3C2FD

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org