Re: [zfs-discuss] Case study/recommended ZFS setup for home file server

2008-07-17 Thread Florin Iucha
On Wed, Jul 09, 2008 at 11:18:23PM -0500, Florin Iucha wrote:
> On Wed, Jul 09, 2008 at 06:02:24PM -0700, Brandon High wrote:
> > Here's the component list that I'm planning to use right now:
> > http://secure.newegg.com/WishList/PublicWishDetail.aspx?Source=MSWD&WishListNumber=7739092
>
> this looks interesting:
>
>    http://www.addonics.com/products/flash_memory_reader/ad2sahdcf.asp
>
> as it has hardware mirroring.  Not sure what the error reporting
> through the OS looks like, though... I hope I don't have to find out.
>
> For the Compact Flash I would spring for the industrial grade:
>
>    http://www.hitechvendors.com/showproduct.aspx?ProductID=4885&SEName=transcend-4gb-100x-industrial-cf-card-udma4-mode

I got the adapter and the flash cards, and SXCE 93 does not like them.

With the adapter set to RAID1, the installer sees the device but then
hangs, with the spinner still active.  I left it for 30 minutes and it
was still spinning.  If I take one of the cards out and set the adapter
to concatenation, the installer gives an AHCI taskfile error and
proceeds, but later on 'format' does not see the disk.
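
For anyone who wants to poke at this, here is roughly how to check
whether the device shows up at all (a sketch; device names and output
are illustrative):

   cfgadm -al          # is the SATA port occupied/configured?
   format < /dev/null  # non-interactive list of the disks the OS sees
   iostat -En          # per-device identity and error counters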

CentOS saw the AHCI southbridge controller and the CF-SATA adapter
without a problem and installed quite happily.

I guess no ZFS for me... 8-((

florin

-- 
Bruce Schneier expects the Spanish Inquisition.
  http://geekz.co.uk/schneierfacts/fact/163




Re: [zfs-discuss] Case study/recommended ZFS setup for home file server

2008-07-11 Thread Ross
It was posted in the CIFS forum a couple of days ago:
http://www.opensolaris.org/jive/forum.jspa?forumID=214

Thread: HEADS-UP: Please skip snv_93 if you use CIFS server:
http://www.opensolaris.org/jive/thread.jspa?threadID=65996&tstart=0
 
 


Re: [zfs-discuss] Case study/recommended ZFS setup for home file server

2008-07-11 Thread Brandon High
On Thu, Jul 10, 2008 at 1:15 AM, Fajar A. Nugraha [EMAIL PROTECTED] wrote:
>> Another alternative is to use an IDE to Compact Flash adapter, and
>> boot off of flash.
>
> Just curious, what will that flash contain?
> e.g. will it be similar to Linux's /boot, or will it contain the full
> Solaris root?
> How do you manage redundancy (e.g. mirror) for that boot device?

4 GB is enough to hold a minimal system install. /var will go to a file
system on the raidz pool.

ZFS mirroring can be used on the boot devices for redundancy.
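
As a sketch, attaching a second flash device to an existing root pool
and making it bootable might look like this on x86 (assuming a root
pool named rpool; device names are illustrative):

   # mirror the existing boot slice onto the second flash device
   zpool attach rpool c2t0d0s0 c2t1d0s0
   # put GRUB on the new half of the mirror so either device can boot
   installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t1d0s0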

-B

-- 
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche


Re: [zfs-discuss] Case study/recommended ZFS setup for home file server

2008-07-10 Thread Ross
My recommendation:  buy a small, cheap 2.5" SATA hard drive (or 1.8" SSD) and
use that as your boot volume; I'd even bolt it to the side of your case if you
have to.  Then use the whole of your three large disks as a raid-z set.

If I were in your shoes I would also have bought 4 drives for ZFS instead of 3, 
and gone for raid-z2.  
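
For reference, the two layouts are just one command each (a sketch;
device names are illustrative):

   # single-parity raid-z across three whole disks
   zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
   # double-parity raid-z2 across four disks survives any two failures
   zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0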

And finally, I don't know how much room you have in your current case, but if 
you're ever looking for one that takes more drives I can highly recommend the 
Antec P182.  I've got 6x 1TB drives in my home server and in that case it's so 
quiet I can't even hear it turn on.  My watch ticking easily drowns out this 
server.

PS.  If you're going to be using CIFS, avoid build 93.
 
 


Re: [zfs-discuss] Case study/recommended ZFS setup for home file server

2008-07-10 Thread Fajar A. Nugraha

Brandon High wrote:
> On Wed, Jul 9, 2008 at 3:37 PM, Florin Iucha [EMAIL PROTECTED] wrote:
>> The question is, how should I partition the drives, and what tuning
>> parameters should I use for the pools and file systems?  From reading
>> the best practices guides [1], [2], it seems that I cannot have the
>> root file system on a RAID-5 pool, but it has to be a separate storage
>> pool.  This seems to be slightly at odds with the suggestion of using
>> whole disks for ZFS, not just slices/partitions.
>
> The reason for using a whole disk is that ZFS will turn on the drive's
> cache. When using slices, the cache is normally disabled. If all
> slices are using ZFS, you can turn the drive cache back on. I don't
> think it happens by default right now, but you can set it manually.
As I recall, using a whole disk for ZFS also changes the disk label to
EFI, meaning you can't boot from it.



> Another alternative is to use an IDE to Compact Flash adapter, and
> boot off of flash.

Just curious, what will that flash contain?
e.g. will it be similar to Linux's /boot, or will it contain the full
Solaris root?

How do you manage redundancy (e.g. mirror) for that boot device?


>> My plan right now is to create a 20 GB and a 720 GB slice on each
>> disk, then create two storage pools, one RAID-1 (20 GB) and one RAID-5
>> (1.44 TB).  Create the root, var, usr and opt file systems in the
>> first pool, and home, library and photos in the second.

Good plan.

>> I hope I
>> won't need swap, but I could create three 1 GB slices (one on each
>> disk) for that.
>
> If you have enough memory (say 4 GB) you probably won't need swap. I
> believe swap can live in a ZFS pool now too, so you won't necessarily
> need another slice. You'll just have RAID-Z protected swap.

Really? I think Solaris still needs non-ZFS swap for the default dump device.

Regards,

Fajar




Re: [zfs-discuss] Case study/recommended ZFS setup for home file server

2008-07-10 Thread Darren J Moffat
Fajar A. Nugraha wrote:
>> If you have enough memory (say 4 GB) you probably won't need swap. I
>> believe swap can live in a ZFS pool now too, so you won't necessarily
>> need another slice. You'll just have RAID-Z protected swap.
>
> Really? I think Solaris still needs non-ZFS swap for the default dump device.

No longer true: you can swap and dump to a ZVOL (but not the same one).
This change came in after the OpenSolaris 2008.05 LiveCD/install was
cut, so 2008.05 doesn't take advantage of it.  There was a big long
thread cross-posted to this list about it just recently.

The current SX:CE installer (i.e. Nevada) uses ZVOLs for swap and dump.
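
If you want to set it up by hand on an existing pool, a rough sketch
(pool name and volume sizes are illustrative):

   # create two ZVOLs: one for swap, a separate one for dump
   zfs create -V 2G rpool/swap
   zfs create -V 2G rpool/dump
   # point the system at them
   swap -a /dev/zvol/dsk/rpool/swap
   dumpadm -d /dev/zvol/dsk/rpool/dump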

-- 
Darren J Moffat


Re: [zfs-discuss] Case study/recommended ZFS setup for home file server

2008-07-10 Thread Florin Iucha
On Thu, Jul 10, 2008 at 12:47:26AM -0700, Ross wrote:
> My recommendation:  buy a small, cheap 2.5" SATA hard drive (or 1.8" SSD)
> and use that as your boot volume; I'd even bolt it to the side of your case
> if you have to.  Then use the whole of your three large disks as a raid-z
> set.

Yup, I'm going with 4 GB of mirrored flash for root/var/usr and I'll keep
the main spindles for data only.

> If I were in your shoes I would also have bought 4 drives for ZFS instead
> of 3, and gone for raid-z2.

No room - Antec NSK-2440 - and too much power draw.  My server idles at
57-64 W (under Linux) and I'd like to keep it that way.

> And finally, I don't know how much room you have in your current case, but
> if you're ever looking for one that takes more drives I can highly
> recommend the Antec P182.  I've got 6x 1TB drives in my home server and in
> that case it's so quiet I can't even hear it turn on.  My watch ticking
> easily drowns out this server.

Heh - I do have the P180 as my workstation case.  But I don't have
that much room for servers 8^)

> PS.  If you're going to be using CIFS, avoid build 93.

Can you please give a link to the discussion, or a bug id?

Thanks,
florin

-- 
Bruce Schneier expects the Spanish Inquisition.
  http://geekz.co.uk/schneierfacts/fact/163




Re: [zfs-discuss] Case study/recommended ZFS setup for home file server

2008-07-10 Thread Richard Elling
Fajar A. Nugraha wrote:
> Brandon High wrote:
>> Another alternative is to use an IDE to Compact Flash adapter, and
>> boot off of flash.
> Just curious, what will that flash contain?
> e.g. will it be similar to Linux's /boot, or will it contain the full
> Solaris root?
> How do you manage redundancy (e.g. mirror) for that boot device?

zfs set copies=2 :-)
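
For context: copies=2 makes ZFS keep two copies of every block in the
dataset, which guards against localized corruption on a single flash
device, though not against losing the device outright.  A sketch, with
illustrative pool and device names:

   zpool create bootpool c2t0d0s0
   zfs set copies=2 bootpool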


hmm... I need to dig up my notes on that and blog it...
 -- richard



Re: [zfs-discuss] Case study/recommended ZFS setup for home file server

2008-07-09 Thread Florin Iucha
Hello,

I plan to use (Open)Solaris for a home file server.  I wanted cool and
quiet hardware, so I picked a mini-ATX motherboard and case, an AMD64
CPU and 4 GB of RAM.  My case has room for three hard drives and I
have chosen 3x WD 750 GB Green Power hard drives.  The file server will
serve out, via NFS and Samba, the home directories, the library
(collected articles and books in PDF format) and the photo archive
(150 GB and growing of photos in RAW format, ~7-9 MB/file).

I cannot use OpenSolaris 2008.05 since it does not recognize the SATA
disks attached to the southbridge; a fix for this problem went into
build 93.  I will use SXCE 93 (for the SATA fix) or SXCE 94 (for the
latest revision of the ZFS on-disk format).

In order to make the maximum amount of space available for the photos,
I plan to use RAID-5 for that pool.  I would also like to have
sufficient redundancy that if a drive goes bad, I can just replace it
and the volume manager/file system will take care of healing itself.
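
With ZFS that recovery should be a sketch like the following, assuming
a pool named tank and illustrative device names:

   zpool replace tank c0t1d0   # swap the failed disk for the new one
   zpool status tank           # watch the resilver run to completion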

The question is, how should I partition the drives, and what tuning
parameters should I use for the pools and file systems?  From reading
the best practices guides [1], [2], it seems that I cannot have the
root file system on a RAID-5 pool, but it has to be a separate storage
pool.  This seems to be slightly at odds with the suggestion of using
whole disks for ZFS, not just slices/partitions.

My plan right now is to create a 20 GB and a 720 GB slice on each
disk, then create two storage pools, one RAID-1 (20 GB) and one RAID-5
(1.44 TB).  Create the root, var, usr and opt file systems in the
first pool, and home, library and photos in the second.  I hope I
won't need swap, but I could create three 1 GB slices (one on each
disk) for that.
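
In zpool terms I imagine something like this (a sketch; ZFS calls its
RAID-5 analogue raidz, and the device names are illustrative):

   # three-way mirror of the 20 GB slices for the system
   zpool create syspool mirror c0t0d0s0 c0t1d0s0 c0t2d0s0
   # single-parity raidz across the 720 GB slices for the data
   zpool create tank raidz c0t0d0s1 c0t1d0s1 c0t2d0s1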

Does this sound like a good configuration?

Will the SXCE 9[34] installer allow me to create the above setup?

Should I pass any special parameters to the zpool and zfs creation
tools to get the best performance?  home and library contain files
between a few KB and a few MB; photos contains files of roughly 7 to
9 MB.  Should I place those on separate pools?

Note: the hardware is committed (i.e. I already have it), so I am not
inclined to deviate from it 8^)

Thanks,
florin

1: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
2: http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide

-- 
Bruce Schneier expects the Spanish Inquisition.
  http://geekz.co.uk/schneierfacts/fact/163




Re: [zfs-discuss] Case study/recommended ZFS setup for home file server

2008-07-09 Thread Brandon High
On Wed, Jul 9, 2008 at 3:37 PM, Florin Iucha [EMAIL PROTECTED] wrote:
> The question is, how should I partition the drives, and what tuning
> parameters should I use for the pools and file systems?  From reading
> the best practices guides [1], [2], it seems that I cannot have the
> root file system on a RAID-5 pool, but it has to be a separate storage
> pool.  This seems to be slightly at odds with the suggestion of using
> whole disks for ZFS, not just slices/partitions.

The reason for using a whole disk is that ZFS will turn on the drive's
cache. When using slices, the cache is normally disabled. If all
slices are using ZFS, you can turn the drive cache back on. I don't
think it happens by default right now, but you can set it manually.
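
Setting it manually goes through format's expert mode; roughly this
menu path (a sketch, from memory):

   format -e             # select the disk, then:
   format> cache
   cache> write_cache
   write_cache> enable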

Another alternative is to use an IDE to Compact Flash adapter, and
boot off of flash. I'll be building a media server once we move, and
that system will boot from flash. You can also boot from USB keys, but
USB under OpenSolaris seems to be iffy.

Here's the component list that I'm planning to use right now:
http://secure.newegg.com/WishList/PublicWishDetail.aspx?Source=MSWD&WishListNumber=7739092

I *may* change it and boot off another drive that is not part of the
RAID-Z pool.

> My plan right now is to create a 20 GB and a 720 GB slice on each
> disk, then create two storage pools, one RAID-1 (20 GB) and one RAID-5
> (1.44 TB).  Create the root, var, usr and opt file systems in the
> first pool, and home, library and photos in the second.  I hope I
> won't need swap, but I could create three 1 GB slices (one on each
> disk) for that.
>
> Does this sound like a good configuration?

If you have enough memory (say 4 GB) you probably won't need swap. I
believe swap can live in a ZFS pool now too, so you won't necessarily
need another slice. You'll just have RAID-Z protected swap.

I built a Linux-based NAS a few years back using an almost identical
scheme and wound up regretting it. If I did it again, I would install
the system on a disk or group of disks completely separate from the
shared pool.

> Should I pass any special parameters to the zpool and zfs creation
> tools to get the best performance?  home and library contain files
> between a few KB and a few MB; photos contains files of roughly 7 to
> 9 MB.  Should I place those on separate pools?

You shouldn't need to do anything. If you want to set the block size,
enable or disable compression, etc., you can create multiple file
systems in your pool rather than multiple pools.
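
For instance, a sketch with an illustrative pool named tank:

   zfs create tank/home
   zfs create tank/library
   zfs create tank/photos
   # per-filesystem tuning; both properties are examples, not requirements
   zfs set compression=on tank/library   # may help for text-heavy PDFs
   zfs set recordsize=128K tank/photos   # large sequential RAW files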

> Note: the hardware is committed (i.e. I already have it), so I am not
> inclined to deviate from it 8^)

You might want to look at a 4 or 8 port SATA adapter rather than wait
for the southbridge fixes.

-B

-- 
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche


Re: [zfs-discuss] Case study/recommended ZFS setup for home file server

2008-07-09 Thread Florin Iucha
On Wed, Jul 09, 2008 at 06:02:24PM -0700, Brandon High wrote:
> On Wed, Jul 9, 2008 at 3:37 PM, Florin Iucha [EMAIL PROTECTED] wrote:
> The reason for using a whole disk is that ZFS will turn on the drive's
> cache. When using slices, the cache is normally disabled. If all
> slices are using ZFS, you can turn the drive cache back on. I don't
> think it happens by default right now, but you can set it manually.

Aha! Good to know.

> Another alternative is to use an IDE to Compact Flash adapter, and
> boot off of flash. I'll be building a media server once we move, and
> that system will boot from flash. You can also boot from USB keys, but
> USB under OpenSolaris seems to be iffy.
>
> Here's the component list that I'm planning to use right now:
> http://secure.newegg.com/WishList/PublicWishDetail.aspx?Source=MSWD&WishListNumber=7739092

That adapter won't work for me, since I have a single IDE port, and I
need to use the DVD to install the OS and maybe to run some backups.

However, this looks interesting:

   http://www.addonics.com/products/flash_memory_reader/ad2sahdcf.asp

as it has hardware mirroring.  I'm not sure what the error reporting
through the OS looks like, though... I hope I don't have to find out.

For the Compact Flash I would spring for the industrial grade:

   
http://www.hitechvendors.com/showproduct.aspx?ProductID=4885&SEName=transcend-4gb-100x-industrial-cf-card-udma4-mode

>> My plan right now is to create a 20 GB and a 720 GB slice on each
>> disk, then create two storage pools, one RAID-1 (20 GB) and one RAID-5
>> (1.44 TB).  Create the root, var, usr and opt file systems in the
>> first pool, and home, library and photos in the second.  I hope I
>> won't need swap, but I could create three 1 GB slices (one on each
>> disk) for that.
>
> I built a Linux-based NAS a few years back using an almost identical
> scheme and wound up regretting it. If I did it again, I would install
> the system on a disk or group of disks completely separate from the
> shared pool.

This is the current Linux-based NAS and I'm not happy with its
performance, either.

>> Note: the hardware is committed (i.e. I already have it), so I am not
>> inclined to deviate from it 8^)
>
> You might want to look at a 4 or 8 port SATA adapter rather than wait
> for the southbridge fixes.

I like the southbridge since it sits on the PCI Express bus.  A PCI
add-on card is limited to the shared 133 MB/s of the PCI bus, which
divided across 3 disks comes to about 44 MB/s each, or 35-40 MB/s
writes after overhead.  And good quality PCI Express add-on
controllers with Solaris drivers are quite expensive.

Cheers,
florin

-- 
Bruce Schneier expects the Spanish Inquisition.
  http://geekz.co.uk/schneierfacts/fact/163

