[zfs-discuss] Space usage

2011-08-14 Thread Lanky Doodle
I'm just uploading all my data to my server and the space used is much more
than what I'm uploading:

Documents = 147MB
Videos = 11G
Software = 1.4G

By my calculations, that equals 12.547G, yet zpool list is showing 21.2G as
being allocated:

NAME    SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
dpool   27.2T  21.2G  27.2T   0%  1.00x  ONLINE  -

It doesn't look like any snapshots have been taken, according to zfs list -t 
snapshot. I've read about the 'copies' parameter but I didn't specify this when 
creating filesystems and I guess the default is 1?

Any ideas?


Re: [zfs-discuss] Space usage

2011-08-14 Thread Lanky Doodle
Thanks fj.

Should have realized that when it showed 27T available, which is the raw total 
size before raid-z2!
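
For anyone comparing the two views later, the difference shows up if the
pool-level and dataset-level numbers are listed side by side (dpool is the pool
name from this thread; the rest is standard zpool/zfs usage):

zpool list dpool                # raw capacity, before raidz2 parity is subtracted
zfs list -o space dpool         # usable space as the filesystems see it
zfs list -t snapshot -r dpool   # rule out snapshots holding space
zfs get copies dpool            # confirm copies is still 1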


[zfs-discuss] Indexing - Windows 7

2011-08-13 Thread Lanky Doodle
Hiya,

I am trying to add shares to my Win7 libraries, but Windows won't let me add
them because they are not indexed.

Does S11E have any server-side indexing feature?


[zfs-discuss] ACLs and Windows

2011-08-12 Thread Lanky Doodle
Hiya,

My S11E server is needed to serve Windows clients. I read a while ago (last 
year!) about 'fudging' it so that Everyone has read/write access.

Is it possible for me to lock this down to specific users? I only have a single
user on my Windows clients, and in some cases (HTPC) this user is logged on
automatically.

So could I map a Windows user to a Solaris user (matching credentials) and give
only that user (owner) access to my ZFS filesystems?

Thanks


Re: [zfs-discuss] Disk IDs and DD

2011-08-10 Thread Lanky Doodle
Oh no I am not bothered at all about the target ID numbering. I just wondered 
if there was a problem in the way it was enumerating the disks.

Can you elaborate on the dd command, LaoTsao? Is the 's' you refer to a
parameter of the command or the slice of a disk? None of my 'data' disks have
been 'configured' yet - I wanted to ID them before adding them to pools.


Re: [zfs-discuss] Disk IDs and DD

2011-08-10 Thread Lanky Doodle
Thanks Andrew, Fajar.


[zfs-discuss] Scripting

2011-08-10 Thread Lanky Doodle
Hiya,

Now that I have figured out how to read disks using dd to make LEDs blink, I
want to write a little script that iterates through all drives, dd's each one
for a few thousand counts, stops, then dd's it again for another few thousand
counts, so I end up with maybe 5 blinks per drive.

I don't want somebody to write something for me, I'd like to be pointed in the 
right direction so I can build one myself :)

Thanks
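
For what it's worth, a minimal sketch of that kind of script (the disk list is
illustrative, and it assumes the p0 whole-disk device nodes exist on x86; adjust
the counts to taste):

#!/bin/sh
# blink each drive in turn by reading from it in short bursts
for disk in c9t7d0 c9t8d0 c9t9d0; do       # list your data disks here
        echo "flashing $disk"
        i=1
        while [ $i -le 5 ]; do             # roughly 5 blinks per drive
                dd if=/dev/rdsk/${disk}p0 of=/dev/null bs=1024k count=2000 2>/dev/null
                sleep 1                    # pause so the LED visibly stops
                i=`expr $i + 1`
        done
done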


[zfs-discuss] Disk IDs and DD

2011-08-09 Thread Lanky Doodle
Hiya,

Is there any reason (and anything to worry about) if disk target IDs don't
start at 0 (zero)? For some reason mine are like this (3 controllers - 1
onboard and 2 PCIe):

AVAILABLE DISK SELECTIONS:
   0. c8t0d0 ATA-ST9160314AS-SDM1 cyl 19454 alt 2 hd 255 sec 63
  /pci@0,0/pci10de,cb84@5/disk@0,0
   1. c8t1d0 ATA-ST9160314AS-SDM1 cyl 19454 alt 2 hd 255 sec 63
  /pci@0,0/pci10de,cb84@5/disk@1,0
   2. c9t7d0 ATA-HitachiHDS72302-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@7,0
   3. c9t8d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@8,0
   4. c9t9d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@9,0
   5. c9t10d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@a,0
   6. c9t11d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@b,0
   7. c9t12d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@c,0
   8. c9t13d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@d,0
   9. c9t14d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@e,0
  10. c10t8d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@8,0
  11. c10t9d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@9,0
  12. c10t10d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@a,0
  13. c10t11d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@b,0
  14. c10t12d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@c,0
  15. c10t13d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@d,0
  16. c10t14d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@e,0

So apart from the onboard controller, the tX numbering (where X is the target
number) doesn't start at 0.

Also, I am trying to make disk LEDs blink by using dd so I can match up disks in
Solaris to the physical slots, but I can't work out the right command:

admin@ok-server01:~# dd if=/dev/dsk/c9t7d0 of=/dev/null
dd: /dev/dsk/c9t7d0: open: No such file or directory

admin@ok-server01:~# dd if=/dev/rdsk/c9t7d0 of=/dev/null
dd: /dev/rdsk/c9t7d0: open: No such file or directory

Thanks
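
(For reference, the device node needs a slice or partition suffix; on x86 the
p0 node covers the whole disk even before the drive has been labelled, so
something along these lines is a reasonable guess:)

dd if=/dev/rdsk/c9t7d0p0 of=/dev/null bs=1024k count=1000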


[zfs-discuss] Mirrored rpool

2011-08-08 Thread Lanky Doodle
Hiya,

I am using the S11E Live CD to install. The installer wouldn't let me select 2
disks for a mirrored rpool, so I did this post-install using this guide:

http://darkstar-solaris.blogspot.com/2008/09/zfs-root-mirror.html

Before I go ahead and continue building my server (zpools), I want to make sure
the above guide is correct for S11E.

The mirrored rpool looks OK, but I want to make sure there's nothing else to do.

Thanks
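
For comparison, the steps in that guide boil down to roughly the following
(disk names and slices here are assumptions based on the rpool disks mentioned
elsewhere in these threads, and installgrub is the x86/GRUB case - worth
checking against your own layout):

prtvtoc /dev/rdsk/c8t0d0s2 | fmthard -s - /dev/rdsk/c8t1d0s2         # copy the partition table
zpool attach rpool c8t0d0s0 c8t1d0s0                                 # turn rpool into a 2-way mirror
zpool status rpool                                                   # wait for the resilver to finish
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8t1d0s0   # make the second disk bootable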


Re: [zfs-discuss] Finding disks [was: # disks per vdev]

2011-07-06 Thread Lanky Doodle
Thanks Trond.

I am aware of this, but to be honest I will not be upgrading very often (my
current WHS setup has lasted 5 years without a single change!) and certainly not
with each iteration of TB size increase, so by the time I do upgrade, say in the
next 5 years, PCIe will probably have been replaced, or got to revision 10.0 or
something stupid!

And anyway, my current motherboard (expensive server board) is only PCIe 1.0 so 
I wouldn't get the benefit of having a PCIe 2.0 card.


Re: [zfs-discuss] Finding disks [was: # disks per vdev]

2011-07-06 Thread Lanky Doodle
> The testing was utilizing a portion of our drives, we have 120 x 750
> SATA drives in J4400s dual pathed. We ended up with 22 vdevs each a
> raidz2 of 5 drives, with one drive in each of the J4400, so we can
> lose two complete J4400 chassis and not lose any data.

Thanks pk.

You know, I never thought about doing 5 drive z2's. That would be an acceptable
compromise for me compared to 2x 7 drive z2's because:

1) resilver times should be faster
2) 5 drive groupings, matching my 5 drive caddies
3) only losing 2TB usable against 2x 7 drive z2's
4) IOPS should be faster
5) if and when I scale up, I can add another 5 drives, in another 5 drive caddy

Super!


Re: [zfs-discuss] Finding disks [was: # disks per vdev]

2011-07-05 Thread Lanky Doodle
OK, I have finally settled on hardware:

2x LSI SAS3081E-R controllers
2x Seagate Momentus 5400.6 rpool disks
15x Hitachi 5K3000 'data' disks

I am still undecided as to how to group the disks. I have read elsewhere that
raid-z1 is best suited to either 3 or 5 disks and raid-z2 is better suited to 6
or 10 disks - is there any truth in this? (I think it was in reference to 4K
sector disks.)

3x 5 drive z1 = 24TB usable
2x 6 drive z2 = 16TB usable

keeping to those recommendations, or

2x 7 disk z2 = 20TB usable, with 1 cold/warm/hot spare

as per my original idea.
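
For what it's worth, written out as commands the two layouts would look roughly
like this (disk names are placeholders):

# 3x 5-drive raid-z1
zpool create tank raidz d1 d2 d3 d4 d5 \
                  raidz d6 d7 d8 d9 d10 \
                  raidz d11 d12 d13 d14 d15

# 2x 7-drive raid-z2 plus a spare
zpool create tank raidz2 d1 d2 d3 d4 d5 d6 d7 \
                  raidz2 d8 d9 d10 d11 d12 d13 d14 \
                  spare d15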


Re: [zfs-discuss] Finding disks [was: # disks per vdev]

2011-07-05 Thread Lanky Doodle
Thanks.

I ruled out the SAS2008 controller as my motherboard is only PCIe 1.0, so I
would not have been able to make the most of the increased bandwidth.

I can't see myself upgrading every few months (my current WHS build has lasted
over 4 years without a single change), so by the time I do come to upgrade, PCIe
will probably be obsolete!!


[zfs-discuss] 512b vs 4K sectors

2011-07-04 Thread Lanky Doodle
Hiya,

I've been doing a lot of research surrounding this and ZFS, including some
posts on here, though I am still left scratching my head.

I am planning on using slow-RPM drives for a home media server, and it's these
that seem to 'suffer' from a few problems:

Seagate Barracuda LP - looks to be the only true 512b sector hard disk; serious
firmware issues
Western Digital Caviar Green - 4K sectors = crap write performance
Hitachi 5K3000 - variable sector sizing (according to the tech specs)
Samsung SpinPoint F4 - just plain old problems with them

What is the best drive of the above four, and are 4K drives really a no-no with
ZFS? Are there any alternatives in the same price bracket?

Who would have thought choosing a hard disk could be so 'hard'!

Thanks


Re: [zfs-discuss] Finding disks [was: # disks per vdev]

2011-06-23 Thread Lanky Doodle
Sorry to pester, but is anyone able to say if the Marvell 9480 chip is now 
supported in Solaris?

The article I read saying it wasn't supported was dated May 2010, so over a year
ago.


Re: [zfs-discuss] Finding disks [was: # disks per vdev]

2011-06-21 Thread Lanky Doodle
Thanks for all the replies.

I have a pretty good idea how the disk enclosure assigns slot locations, so I
should be OK.

One last thing - I see that Supermicro has just released a newer version of the
card I mentioned in the first post that supports SATA 6Gbps. From what I can
see it uses the Marvell 9480 controller, which I don't think is supported in
Solaris Express 11 yet.

Does this mean it strictly won't work (i.e. no available drivers), or just that
it wouldn't be supported if there are problems?


Re: [zfs-discuss] # disks per vdev

2011-06-17 Thread Lanky Doodle
> 1 - are the 2 vdevs in the same pool, or two separate pools?

I was planning on having the 2 z2 vdevs in one pool. Although having 2 pools 
and having them sync'd sounds really good, I fear it may be overkill for the 
intended purpose.

 
 
> 3 - spare temperature
>
> for levels raidz2 and better, you might be happier with a warm spare
> and manual replacement, compared to overly-aggressive automated
> replacement if there is a cascade of errors.  See recent threads.
>
> You may also consider a cold spare, leaving a drive bay free for
> disks-as-backup-tapes swapping.  If you replace the 1Tb's now,
> repurpose them for this rather than reselling.

I have considered this. The fact I am using cheap disks inevitably means they
will fail sooner and more often than enterprise equivalents, so the hot spare
may need to be over-used.

Could I have different sized vdevs and still have them both in one pool - i.e.
an 8 disk z2 vdev and a 7 disk z2 vdev?
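
(Mixing vdev widths in one pool is allowed, for what it's worth; zpool may
complain about a mismatched replication level when the second vdev goes in, in
which case -f overrides the warning. A rough sketch with placeholder names:)

zpool create tank raidz2 d1 d2 d3 d4 d5 d6 d7 d8
zpool add -f tank raidz2 d9 d10 d11 d12 d13 d14 d15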

 
> 4 - the 16th port
>
> Can you find somewhere inside the case for an SSD as L2ARC on your
> last port?  Could be very worthwhile for some of your other data and
> metadata (less so the movies).

Yes! I have 10 5.25" drive bays in my case. 9 of them are occupied by the
5-in-3 hot-swap caddies, leaving 1 bay left. I was planning on using one of these
http://www.scan.co.uk/products/icy-dock-mb994sp-4s-4in1-sas-sata-hot-swap-backplane-525-raid-cage
in the remaining bay and having 2x 2.5" SATA drives mirrored for the root pool,
leaving 2 drive bays spare.

For the mirrored root pool I was going to use 2 of the 6 motherboard SATA II
ports so they are entirely separate from the 'data' controllers. So I could
either use the 16th port on the Supermicro controllers for an SSD, or one of the
remaining motherboard ports.

What size would you recommend for the L2ARC disk? I ask because I have a 72GB
10k SAS disk spare, so I could use this for now (being faster than SATA), but it
would have to be on the Supermicro card as that also supports SAS drives. SSDs
are a bit out of range price-wise at the moment, so I'd wait to use one. Also,
ZFS doesn't support TRIM yet, does it?
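
(For reference, a cache device can be added and removed at any time, so starting
with the spare 72GB disk and swapping in an SSD later is straightforward - the
pool and device names below are placeholders:)

zpool add tank cache c10t15d0      # press the spare disk into service as L2ARC
zpool remove tank c10t15d0         # drop it again once an SSD turns up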

Thank you for your excellent post! :)


Re: [zfs-discuss] # disks per vdev

2011-06-17 Thread Lanky Doodle
Thanks Richard.

How does ZFS enumerate the disks? In terms of listing them, does it do so
logically, i.e.:

controller #1 (motherboard)
|
|--- disk1
|--- disk2
controller #3
|--- disk3
|--- disk4
|--- disk5
|--- disk6
|--- disk7
|--- disk8
|--- disk9
|--- disk10
controller #4
|--- disk11
|--- disk12
|--- disk13
|--- disk14
|--- disk15
|--- disk16
|--- disk17
|--- disk18

or is it completely random, leaving me with some trial and error to work out
which disk is on which port?
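
One way to check is to compare the cXtYdZ names against the physical device
paths, since the controller numbers come from the order the drivers attach
rather than anything ZFS decides, e.g.:

format < /dev/null    # lists every disk with its device path (controller/target)
zpool status -v       # once a pool exists, shows which cXtYdZ names it uses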


Re: [zfs-discuss] # disks per vdev

2011-06-17 Thread Lanky Doodle
> I was planning on using one of these
> http://www.scan.co.uk/products/icy-dock-mb994sp-4s-4in1-sas-sata-hot-swap-backplane-525-raid-cage

Imagine if 2.5" 2TB disks were price-neutral compared to 3.5" equivalents.

I could have 40 of the buggers in my system, giving 80TB raw storage! I'd
happily use mirrors all the way in that scenario.


Re: [zfs-discuss] # disks per vdev

2011-06-17 Thread Lanky Doodle
> 4 - the 16th port
>
> Can you find somewhere inside the case for an SSD as L2ARC on your
> last port?

Although saying that, if we are saying hot spares may be bad in my scenario, I
could ditch the spare and use a 3.5" SSD in the 15th drive's place?


Re: [zfs-discuss] # disks per vdev

2011-06-16 Thread Lanky Doodle
Thanks guys.

I have decided to bite the bullet and change to 2TB disks now rather than go
through all the effort of using 1TB disks and then maybe changing in 6-12
months' time or whatever. The price difference between 1TB and 2TB disks is
marginal and I can always re-sell my 6x 1TB disks.

I think I have also narrowed down the raid config to these three:

2x 7 disk raid-z2 with 1 hot spare - 20TB usable
3x 5 disk raid-z2 with 0 hot spares - 18TB usable
2x 6 disk raid-z2 with 2 hot spares - 16TB usable

with option 1 probably being preferred at the moment.

I am aware that bad batches of disks do exist, so I tend to either a) buy them
in sets from different suppliers or b) use different manufacturers. How
sensitive is ZFS to mixing different disks, in terms of disk features (NCQ, RPM
speed, firmware/software versions, cache etc.)?

Thanks


Re: [zfs-discuss] # disks per vdev

2011-06-15 Thread Lanky Doodle
Thanks Edward.

In that case, which 'option' would you choose - smaller raid-z vdevs or larger
raid-z2 vdevs?

I do like the idea of having a hot spare, so 2x 7 disk raid-z2 may be the better
option rather than 3x 5 disk raid-z with no hot spare. The 2TB loss in the
former could be acceptable, I suppose, for the sake of better protection. When
4-5TB drives come to market, 2-3TB drives will drop in price so I could always
upgrade them - can you do this with raid-z vdevs, in terms of autoexpand?

There might be the odd deletion here and there if a movie is truly turd, but as 
you say 99% of the time it will be written and left.


Re: [zfs-discuss] # disks per vdev

2011-06-15 Thread Lanky Doodle
That's how I understood autoexpand - the pool doesn't expand until all the disks
in the vdev have been replaced.

I do indeed rip from disc rather than grab torrents - to VIDEO_TS folders and 
not ISO - on my laptop then copy the whole folder up to WHS in one go. So while 
they're not one large single file, they are lots of small .vob files, but being 
written in one hit.

This is a bit OT, but can you have one vdev that is a duplicate of another vdev?
By that I mean, say you had 2x 7 disk raid-z2 vdevs: instead of both being used
in one large pool, could you have one be a backup of the other, allowing you to
destroy one of them and rebuild without data loss?
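
(This is normally done at the pool level rather than per vdev: make the second
set of disks its own pool and replicate with zfs send/receive. A rough sketch
with hypothetical pool names tank and backup:)

zfs snapshot -r tank@backup1
zfs send -R tank@backup1 | zfs receive -Fd backup                    # full copy of the main pool

zfs snapshot -r tank@backup2
zfs send -R -i tank@backup1 tank@backup2 | zfs receive -Fd backup    # just the changes since backup1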


[zfs-discuss] # disks per vdev

2011-06-14 Thread Lanky Doodle
Hiya,

I am just in the planning stages for my ZFS Home Media Server build at the 
moment (to replace WHS v1).

I plan to use 2x motherboard ports and 2x Supermicro AOC-SASLP-MV8 8-port SATA
cards to give 17* drive connections; 2 disks (120GB 2.5" SATA) will be used for
the ZFS install using the motherboard ports, and the remaining 15 disks (1TB
SATA) will be used for data using the 2x 8-port cards.

* = the total number of ports is 18, but I only have enough space in the chassis
for 17 drives (2x 2.5" in 1x 3.5" bay, and 15x 3.5" by using 5-in-3 hot-swap
caddies in 9x 5.25" bays).

All disks are 5400RPM to keep power requirements down.

The ZFS install will be mirrored, but I am not sure how to configure the 15
data disks from a performance (inc. resilvering) vs protection vs usable space
perspective:

3x 5 disk raid-z: 3 disk failures in the right scenario, 12TB storage
2x 7 disk raid-z + hot spare: 2 disk failures in the right scenario, 12TB storage
1x 15 disk raid-z2: 2 disk failures, 13TB storage
2x 7 disk raid-z2 + hot spare: 4 disk failures in the right scenario, 10TB storage

Without having a mash of different raid-z* levels I can't think of any other 
options.

I am leaning towards the first option as it gives separation between all the
disks; I would have separate Movie folders on each of them while having
critical data (pictures, home videos, documents etc.) stored on each set of
raid-z.

Suggestions welcomed.

Thanks


Re: [zfs-discuss] # disks per vdev

2011-06-14 Thread Lanky Doodle
Thanks Edward.

I'm in two minds with mirrors. I know they provide the best performance and 
protection, and if this was a business critical machine I wouldn't hesitate.

But as it's for a home media server, which is mainly WORM access and will be
storing (legal!) DVD/Blu-ray rips, I'm not so sure I can sacrifice the space.

7x 2-way mirrors would give me 7TB usable with 1 hot spare, using 1TB disks,
which is a big drop from 12TB! I could always jump to 2TB disks, giving me 14TB
usable, but I already have 6x 1TB disks in my WHS build which I'd like to re-use.

Hmmm!


Re: [zfs-discuss] # disks per vdev

2011-06-14 Thread Lanky Doodle
Thanks martysch.

That is what I meant about adding disks to vdevs - not adding disks to vdevs 
but adding vdevs to pools.

If the geometry of the vdevs should ideally be the same, it would make sense to
buy one more disk now and have a 7 disk raid-z2 to start with, then buy disks
as and when and create a further 7 disk raid-z2, leaving the 15th disk as a hot
spare. That would 'only' give 10TB usable though.

The only thing is, I seem to remember reading that if you add a vdev to a pool
long after the pool was created and data has been written to it, data isn't
spread evenly across the vdevs - is that right? So it might actually make sense
to buy all the disks now and start fresh with the final build.

Starting with only 6 disks would leave growth for another 6 disk raid-z2 (to
keep matching geometry), leaving 3 disks spare, which is not ideal.


Re: [zfs-discuss] A few questions

2010-12-21 Thread Lanky Doodle
> It's worse on raidzN than on mirrors, because the number of items which must
> be read is higher in raidzN, assuming you're using larger vdevs and therefore
> more items exist scattered about inside that vdev.  You therefore have a
> higher number of things which must be randomly read before you reach
> completion.

In that case, isn't the answer to have a dedicated parity disk (or 2 or 3,
depending on which raidz* is used), a la RAID-DP? Wouldn't this effectively be
the 'same' as a mirror when resilvering (the only difference being parity vs
actual data), as it's resilvering from a single disk?

RAID-DP protects the parity disk against failure, so a raidz1-style single
parity disk probably wouldn't be sensible, as losing that parity disk would
leave the data unprotected.


Re: [zfs-discuss] A few questions

2010-12-20 Thread Lanky Doodle
Thanks Edward.

I do agree about mirrored rpool (equivalent to Windows OS volume); not doing it 
goes against one of my principles when building enterprise servers.

Is there any argument against using the rpool for all data storage as well as 
being the install volume?

Say, for example, I chucked 15x 1TB disks in there and created a mirrored rpool
during installation using 2 disks. If I added another 6 mirrors (12 disks) to
it, that would give me an rpool of 7TB, with the 15th disk as a spare.

Or, say I selected 3 disks during install, does this create a 3 way mirrored 
rpool or does it give you the option of creating raidz? If so, I could then 
create a further 4x 3 drive raidz's, giving me a 10TB rpool.

Or, I could use 2 smaller disks (say 80GB) for the rpool, then create 4x 3 
drive raidz's, giving me an 8TB rpool. Again this gives me a spare disk.

Any of these 3 should keep resilvering times to a minimum, compared with, say,
one big raidz2 of 13 disks.

Why does resilvering take so long in raidz anyway?


Re: [zfs-discuss] A few questions

2010-12-20 Thread Lanky Doodle
Oh, does anyone know if resilvering efficiency is improved or fixed in Solaris
11 Express, as that is what I'm using?


Re: [zfs-discuss] A few questions

2010-12-20 Thread Lanky Doodle
> I believe Oracle is aware of the problem, but most of the core ZFS team has
> left. And of course, a fix for Oracle Solaris no longer means a fix for the
> rest of us.

OK, that is a bit concerning then. As good as ZFS may be, I'm not sure I want
to commit to a file system that is 'broken' and may not be fully fixed, if at
all.

Hmnnn...


Re: [zfs-discuss] A few questions

2010-12-20 Thread Lanky Doodle
Thanks relling.

I suppose at the end of the day any file system/volume manager has its flaws,
so perhaps it's better to look at the positives of each and decide based on
them.

So, back to my question above: is there a deciding argument [i]against[/i]
putting data on the install volume (rpool)? Forget about mirroring for a sec;

1) Select 3 disks during install creating raidz1. Create a further 4x 3 drive 
raidz1's, giving me a 10TB rpool with no spare disks

2) Select 5 disks during install creating raidz1. Create a further 2x 5 drive
raidz1's, giving me a 12TB rpool with no spare disks

3) Select 7 disks during install creating raidz1. Create a further 7 drive
raidz1, giving me a 12TB rpool with 1 spare disk

As there is no space gain between 2) and 3) there is no point going for 3), 
other than having a spare disk, but resilver times would be slower.

So it becomes a choice between 1) and 2). Neither offers spare disks, but 1)
would offer faster resilver times with up to 5 simultaneous disk failures, and
2) would offer 2TB extra space with up to 3 simultaneous disk failures.

FYI, I am using Samsung SpinPoint F2's, which have the variable RPM speeds 
(http://www.scan.co.uk/products/1tb-samsung-hd103si-ecogreen-f2-sata-3gb-s-32mb-cache-89-ms-ncq)

I may wait at least until I get the next 4 drives in (I actually have 6 at the
mo, not 5), taking me to 10, before migrating to ZFS, so there's plenty of time
to think about it - and hopefully time for them to fix resilvering! ;-)

Thanks again...


Re: [zfs-discuss] A few questions

2010-12-18 Thread Lanky Doodle
On the subject of where to install ZFS, I was planning to use either Compact 
Flash or USB drive (both of which would be mounted internally); using up 2 of 
the drive bays for a mirrored install is possibly a waste of physical space, 
considering it's a) a home media server and b) the config can be backed up to a 
protected ZFS pool - if the CF or USB drive failed I would just replace and 
restore the config.

Can you have the equivalent of a global hot spare in ZFS? If I did go down the
mirror route (mirror disk0 disk1 mirror disk2 disk3 mirror disk4 disk5 etc.) all
the way up to 14 disks, that would leave the 15th disk spare.
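
(For reference, ZFS spares are pool-wide, so the 15th disk can simply be tacked
onto the end of the pool definition - placeholder names below:)

zpool create tank mirror d1 d2 mirror d3 d4 mirror d5 d6 spare d15
zpool add tank spare d15      # or added to an existing pool later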

Now this is getting really complex, but can you have server failover in ZFS,
much like DFS-R in Windows - i.e. you point clients to a clustered ZFS namespace
so that if a complete server fails nothing is interrupted?

I am still undecided as to mirror vs RAID-Z. I am going to be ripping
uncompressed Blu-rays so space is vital. I use RAID-DP on NetApp kit at work and
I'm guessing RAID-Z2 is the equivalent? I have 5TB of space at the moment, so
going to the expense of mirroring for only 2TB extra doesn't seem much of a
payoff.

Maybe a compromise of 2x 7-disk RAID-Z1 with a global hot spare is the way to go?

Put it this way, I currently use Windows Home Server, which has no true disk
failure protection, so any of ZFS's redundancy schemes is going to be a step up;
is there an equivalent scheme in ZFS where if 1 disk fails you only lose that
disk's data, like unRAID?

Thanks everyone for your input so far :)


Re: [zfs-discuss] A few questions

2010-12-17 Thread Lanky Doodle
Thanks for all the replies.

The bit about combining zpools came from this command in the southbrain
tutorial:

zpool create mail \
  mirror c6t600D0230006C1C4C0C50BE5BC9D49100d0 c6t600D0230006B66680C50AB7821F0E900d0 \
  mirror c6t600D0230006B66680C50AB0187D75000d0 c6t600D0230006C1C4C0C50BE27386C4900d0

I admit I was getting confused between zpools and vdevs, thinking in the above 
command that each mirror was a zpool and not a vdev.

Just so I'm correct, a normal command would look like

zpool create mypool raidz disk1 disk2 disk3 disk4 disk5

which would result in a zpool called mypool, made up of a single 5 disk raidz
vdev? This means that zpools don't directly 'contain' physical devices (which is
what I originally thought) - they contain vdevs, which contain the devices.
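
(Right - the hierarchy is pool -> vdev -> device, so a second raidz vdev is
simply added to the same pool. Illustrative names again:)

zpool create mypool raidz disk1 disk2 disk3 disk4 disk5
zpool add mypool raidz disk6 disk7 disk8 disk9 disk10    # same pool, now two raidz vdevs
zpool status mypool                                      # shows the pool / vdev / disk tree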


Re: [zfs-discuss] A few questions

2010-12-17 Thread Lanky Doodle
OK cool.

One last question. Reading the Admin Guide for ZFS, it says:

[i]A more complex conceptual RAID-Z configuration would look similar to the
following:

raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 c7t0d0
raidz c8t0d0 c9t0d0 c10t0d0 c11t0d0 c12t0d0 c13t0d0 c14t0d0

If you are creating a RAID-Z configuration with many disks, as in this example,
a RAID-Z configuration with 14 disks is better split into two 7-disk groupings.
RAID-Z configurations with single-digit groupings of disks should perform
better.[/i]

This is relevant as my final setup was planned to be 15 disks, so only one more
than the example.

So, do I drop one disk and go with 2x 7 drive vdevs, or stick with 3x 5 drive
vdevs?

Also, does anyone have anything to add re the security of CIFS when used with 
Windows clients?

Thanks again guys, and gals...


Re: [zfs-discuss] A few questions

2010-12-17 Thread Lanky Doodle
Thanks!

By single drive mirrors, I assume, in a 14 disk setup, you mean 7 sets of 2 
disk mirrors - I am thinking of traditional RAID1 here.

Or do you mean 1 massive mirror with all 14 disks?

This is always a tough one for me. I too prefer RAID1 where redundancy is king,
but the trade-off for me would be 5TB of 'wasted' space - a total of 7TB in
mirrors vs 12TB in 3x RAIDZ.

Decisions, decisions.


[zfs-discuss] A few questions

2010-12-16 Thread Lanky Doodle
Hiya,

I have been playing with ZFS for a few days now on a test PC, and I plan to use
it for my home media server after being very impressed!

I've got the basics of creating zpools and zfs filesystems with compression and 
dedup etc, but I'm wondering if there's a better way to handle security. I'm 
using Windows 7 clients by the way.

I have used this 'guide' to do the permissions - http://www.slepicka.net/?p=37

Also, at present I have 5x 1TB drives to use in my home server, so I plan to
create a RAID-Z1 pool which will have my shares on it (Movies, Music, Pictures
etc). I then plan to increase this in sets of 5 (so another 5x 1TB drives in
Jan and another 5 in Feb/March, so that I can avoid all disks being from the
same batch). I did plan on creating separate zpools with each set of 5 drives:

drives 1-5 volume0 zpool
drives 6-10 volume1 zpool
drives 11-15 volume2 zpool

so that I can sustain 3 simultaneous drive failures, as long as it's one drive
from each set. However, I think this would mean each zpool has independent
shares, which I don't want. I have used this guide -
http://southbrain.com/south/tutorials/zpools.html - which says you can combine
zpools into a 'parent' zpool, but can this be done in my scenario (staggered),
as it looks like the child zpools have to be created before the parent is? So
basically I'd need to be able to:

Create volume0 zpool now
Create volume1 zpool in Jan, then combine volume0 and volume1 into a parent 
zpool
Create volume2 in Feb/March and add to parent zpool

I know I could just add each disk to the volume0 zpool, but I've read it's a
bugger to do and that creating separate zpools with new disks is a much better
way to go.
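
(For what it's worth, the usual approach to staggered purchases is a single pool
grown by adding a raidz vdev with each batch, rather than combining separate
pools - a rough sketch with placeholder disk names:)

zpool create volume0 raidz d1 d2 d3 d4 d5       # now: first five drives
zpool add volume0 raidz d6 d7 d8 d9 d10         # January: second batch
zpool add volume0 raidz d11 d12 d13 d14 d15     # Feb/March: third batch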

I think that's it for now. Sorry for the mammoth first post!

Thanks


Re: [zfs-discuss] A few questions

2010-12-16 Thread Lanky Doodle
Thanks for the reply.

In that case, wouldn't it be better to, as you say, start with a 6 drive Z2, 
then just keep adding drives until the case is full, for a single Z2 zpool?

Or even Z3, if that's available now?

I have an 11x 5.25" bay case, with 3x 5-in-3 hot swap caddies giving me 15
drive bays. Hence the plan to start with 5, then 10, then all the way to 15.

This seems a more logical (and cheaper) solution than continually replacing
with bigger drives as they come to market.