Re: [zfs-discuss] Confused about consumer drives and zfs - can someone help?

2010-07-23 Thread tomwaters
There is a lot there to reply to...but I will try and help...

Re. TLER: do not worry about TLER when using ZFS. ZFS will handle it either way 
and will NOT time out and drop the drive...it may wait a long time, but it will 
not drop the drive - nor will it have an issue if you do enable TLER-ON (which 
sets the timeout to 7 seconds). I run both: disks with TLER-ON (from an old 
mdadm RAID array) and 1.5TB WD EADS drives with TLER-OFF.

I can not speak for the WD EARS, but the WD EADS are fine in my home NAS. I also 
run the 1.5TB Samsung Green/Silencer series and Seagate 11s. Others swear by 
Hitachi. I would recommend the Samsung or Hitachi and not the new WD EARS, which 
use 4k sectors.

Re. the CPU: do not go low-power Atom etc., go a newish Core 2 Duo...the power 
differential at idle is bugger all, and when you want to use the NAS, ZFS will 
make good use of the CPU. Honestly, sit down and do the calculations on the power 
savings of a low-power CPU and you'll see it's better to just not have that 5th 
beer on a Friday - you'll save more money that way and be MUCH happier with 
your NAS performance.
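(To put rough numbers on that, assuming say a 20W idle difference and 
electricity at $0.25/kWh:

20W x 24h x 365 days = ~175 kWh/year
175 kWh x $0.25/kWh = ~$44/year, i.e. less than a beer a week.)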

Re. cards: I use and recommend the 8-port Supermicro AOC-USASLP-L8I UIO SAS. 
They are cheap on eBay, just work, and are fast. Use them.

You do want a lot of RAM. I use 8GB, but you can use 4. RAM is cheap and ZFS 
loves RAM, so just buy 8GB.

IMHO (and that of the best practices guide), you should mirror the rpool (O/S 
disk). Just buy 2 cheap laptop drives and choose to mirror them when 
installing.
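(If you add the second disk after the install instead, the attach is something 
like this - device names are just examples, and slice 0 is where the root pool 
lives:

# zpool attach rpool c8t0d0s0 c8t1d0s0
# installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8t1d0s0

The installgrub is what makes the second disk bootable.)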

I hope that helps.


Re: [zfs-discuss] NFS performance?

2010-07-23 Thread tomwaters
I agree, I get appalling NFS speeds compared to CIFS/Samba...ie. CIFS/Samba at 
95-105MB/s and NFS at 5-20MB/s.

Not to hijack the thread, but I assume an SSD ZIL will similarly improve an iSCSI 
target...as I am getting 2-5MB/s on that too.
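(For reference: NFS and iSCSI both tend to issue synchronous writes, which is 
exactly what a dedicated log device absorbs. Adding one is a one-liner - pool 
and device names made up:

# zpool add tank log c8t4d0
)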


[zfs-discuss] physically removed a pool - how do I tell it to forget the pool?

2010-07-23 Thread tomwaters
Hi guys, I physically removed disks from a pool without offlining the pool 
first...(yes I know). Anyway, I now want to delete/destroy the pool...but zpool 
destroy -f dvr says: cannot open 'dvr': no such pool

I can not offline it or delete it!

I want to reuse the name dvr, but how do I do this?

ie. how do I totally delete a pool whose disks have been physically removed?


Re: [zfs-discuss] physically removed a pool - how do I tell it to forget the pool?

2010-07-23 Thread tomwaters
It was fine on the reboot...so even though zpool destroy threw up the errors, it 
did remove the pool...it just needed a reboot for it to disappear from the zpool 
list.

thanks.
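(For anyone who hits this later: the stale entry lives in /etc/zfs/zpool.cache, 
so before resorting to a reboot it may be worth trying a forced export - 
untested here:

# zpool export -f dvr
)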


[zfs-discuss] Performance advantages of a pool with 2x raidz2 vdevs vs. a single vdev

2010-07-19 Thread tomwaters
Hi guys, I am about to reshape my data pool and am wondering what performance 
difference I can expect from the new config vs. the old.

The old config is a pool with a single vdev of 8 disks in raidz2.
The new config is 2 vdevs of 7-disk raidz2 in a single pool.
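(To make that concrete, the new pool would be created something like this - pool 
and device names made up:

# zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0

ZFS stripes writes dynamically across top-level vdevs, so two vdevs should give 
roughly twice the random IOPS of one, since each raidz2 vdev performs like a 
single disk for small random IO.)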

I understand it should be better, with higher IO throughput and better 
read/write rates...but I am interested to hear the science behind it.

I have googled and read the ZFS best practices guide and the evil tuning guide, 
but neither covers my question.

Appreciate any advice.

FYI, it's just a home serverbut I like it.


Re: [zfs-discuss] Performance advantages of a pool with 2x raidz2 vdevs vs. a single vdev

2010-07-19 Thread tomwaters
Thanks, seems simple.


Re: [zfs-discuss] zfs send/receive - actual performance

2010-04-01 Thread tomwaters
> If you see the workload on the wire go through regular patterns of fast/slow
> response, then there are some additional tricks that can be applied to
> increase the overall throughput and smooth the jaggies. But that is fodder
> for another post...

Can you please elaborate on what can be done here, as I am seeing this.
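(In case it helps anyone searching later: one trick I have since seen suggested 
for smoothing bursty send/receive is putting a large buffer in the pipe so the 
sender and receiver stop stalling each other, e.g. with mbuffer - no idea if 
that is what was meant here, and the host/dataset names are made up:

# zfs send tank/data@snap1 | mbuffer -s 128k -m 1G | ssh backuphost 'mbuffer -s 128k -m 1G | zfs recv backup/data'
)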


Re: [zfs-discuss] Intel SASUC8I - worth every penny

2010-03-11 Thread tomwaters
Glad you got it humming!

I got my 2x 8-port LSI cards from here for $130 USD...

http://cgi.ebay.com/BRAND-NEW-SUPERMICRO-AOC-USASLP-L8I-UIO-SAS-RAID_W0QQitemZ280397639429QQcmdZViewItemQQptZLH_DefaultDomain_0?hash=item4149006f05

Works perfectly.


Re: [zfs-discuss] Intel SASUC8I - worth every penny

2010-03-11 Thread tomwaters
Hi,
  I suspect mine are already in IT mode...not sure how to confirm that 
though...I have had no issues.

My controller is showing as c8...odd, isn't it. It's in the 16x PCIe slot at the 
moment...I am not sure how it gets the number...


Re: [zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?

2010-03-08 Thread tomwaters
Hi Jason, I spent months trying different O/S's for my server and finally 
settled on opensolaris.

The O/S is just as easy to install, learn, and use as any of the Linux 
variants...and ZFS beats mdadm hands down.

I had a server up and sharing files in under an hour. Just do it - (you'll 
know soon enough if the hardware is going to support opensolaris). If you are 
unsure, start by installing it in a virtual machine (ie. VirtualBox) and have 
a play.

It's a very helpful community here and you'll get all the support you'll need.

I found these links and they may help get you started...
[url=http://blogs.sun.com/icedawn/entry/bondin]SETTING UP AN OPENSOLARIS NAS BOX: FATHER-SON BONDING[/url] - a very simple/easy guide...all you'll need to get a ZFS server up and running.
[url=http://flux.org.uk/howto/solaris/zfs_tutorial_01]zfs tutorial part 1[/url]
[url=http://malsserver.blogspot.com/2008/08/setting-up-static-network-configuration.html]Setting up a static network configuration with NWAM[/url]
[url=http://forums.opensolaris.com/message.jspa?messageID=1125]How do you configure OpenSolaris to automatically login a user?[/url]
[url=http://wikis.sun.com/display/OpenSolarisInfo/How+to+Set+Up+Samba+in+the+OpenSolaris+2009.06+Release]How to Set Up Samba in the OpenSolaris 2009.06 Release - Information Resources - wikis.sun.com[/url]
[url=http://wiki.genunix.org/wiki/index.php/Getting_Started_With_the_Solaris_CIFS_Service]Getting Started With the Solaris CIFS Service[/url]
[url=http://blogs.sun.com/pradhap/entry/mount_ntfs_ext2_ext3_in]Mount NTFS / Ext2 / Ext3 / FAT 16 / FAT 32 in Solaris[/url] and also [url=http://www.iiitmk.ac.in/wiki/index.php/How_to_Mount/Unmount_NTFS,FAT32,ext3_Partitions_in_Opensolaris_5.11_snv_101b]here[/url] - note the 2G limit.


Re: [zfs-discuss] Thoughts pls. : Create 3 way rpool mirror and shelve one mirror as a backup

2010-03-08 Thread tomwaters
Thanks guys,
 It's all working perfectly so far...and very easy too.

Given that my boot disks (consumer laptop drives) cost only ~$60 AUD each, it's 
a cheap way to maintain high availability and a backup.

ZFS does not seem to mind having one of the 3 offline, so I'd recommend this to 
others looking for a simple offline backup solution (just remember to install 
grub on the spare to make it bootable)...but you'd test your backups anyway 
before you send them offsite...wouldn't you :)


[zfs-discuss] Thoughts pls. : Create 3 way rpool mirror and shelve one mirror as a backup

2010-03-06 Thread tomwaters
Hi guys,
   On my home server (2009.06) I have 2 HDDs in a mirrored rpool.

I just added a 3rd to the mirror and made all disks bootable (ie. ran 
installgrub on the mirror disks).

My thought is this: I remove the 3rd mirror disk and offsite it as a backup.

That way, if I mess up the rpool, I can get back the offsite HDD, boot from it, 
re-mirror it to the other 2 HDDs, and I am back in business.

I plan to leave the 3rd mirror device in the rpool (just with no HDD loaded, so 
it will show as degraded all the time). On a monthly basis, I'll physically 
insert the 3rd HDD, let it resilver, and then remove the 3rd HDD offsite again - 
ie. refresh the backup.
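(Roughly, with c8t2d0s0 as a made-up name for the 3rd disk: plug it back in, then

# zpool online rpool c8t2d0s0
# zpool status rpool

...wait for the resilver to complete, then

# zpool offline rpool c8t2d0s0

...and pull the disk and offsite it again.)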

Anyone see any flaws in this plan?


Re: [zfs-discuss] What's the advantage of using multiple filesystems in a pool

2010-03-02 Thread tomwaters
Ahh, interesting...once I get the data relatively stable in some of those 
sub-folders, I will create a file system, move the data in there, and set up 
snapshots for those that are relatively static...now I just need to do a load of 
reading about snapshots!
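(The basic snapshot workflow is only a couple of commands - dataset and snapshot 
names are just examples:

# zfs create cloud/photos
# zfs snapshot cloud/photos@2010-03-02
# zfs list -t snapshot
# zfs rollback cloud/photos@2010-03-02
)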

Thanks again...so much to learn.


Re: [zfs-discuss] What's the advantage of using multiple filesystems in a pool

2010-03-01 Thread tomwaters
Thanks for that Erik.


[zfs-discuss] What's the advantage of using multiple filesystems in a pool

2010-02-28 Thread tomwaters
Hi guys, on my home server I have a variety of directories under a single 
pool/filesystem, cloud.

Things like
cloud/movies     - 4TB
cloud/music      - 100GB
cloud/winbackups - 1TB
cloud/data       - 1TB

etc.

After doing some reading, I see recommendations to have separate filesystems to 
improve performance...but I am not sure how, as it's the same pool?

Can someone help me understand if/why I should use separate file systems for 
these?
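(To make the question concrete: creating these as datasets would look like the 
below, and each could then get its own properties and snapshots - the property 
values are just examples:

# zfs create cloud/music
# zfs set compression=on cloud/music
# zfs set quota=150G cloud/music
)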

ta.


Re: [zfs-discuss] Oops, ran zfs destroy after renaming a folder and deleted my file system.

2010-02-25 Thread tomwaters
Yes, I am glad that I learned this lesson now, rather than in 6 months when I 
have re-purposed the existing drives...it makes me all the more committed to 
maintaining an up-to-date remote backup.

The reality is that I cannot afford to mirror the 8TB in the zpool, so I'll 
balance the risk and just back up the important items like music, documents and 
photos.

ISOs and DVDs I can always recreate.

Thanks guys for the support. I'll do a lot more reading on ZFS/datasets etc. 
before I mess with the server again!...hmmm, actually, I just had a thought: I 
will also create a play pool where I can try out things before using them on the 
main pool.
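(A play pool is easy with file-backed vdevs, no spare disks needed - sizes and 
paths made up:

# mkfile 100m /var/tmp/d1 /var/tmp/d2 /var/tmp/d3
# zpool create play raidz /var/tmp/d1 /var/tmp/d2 /var/tmp/d3
)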


Re: [zfs-discuss] Adding a zfs mirror drive to rpool - new drive formats to one cylinder less

2010-02-25 Thread tomwaters
Just an update on this: I was seeing high CPU utilisation (100% on all 4 cores) 
for ~10 seconds every 20 seconds when transferring files to the server using 
Samba under build 133.

So I rebooted and selected 111b, and I no longer have the issue. Interestingly, 
the rpool is still in place...as it should be. So I have now set this 111b as 
my default BE...and pointed the package publisher back at the release repository 
(instead of /dev) using...
$ pfexec pkg set-publisher -O http://pkg.opensolaris.org/release opensolaris.org

I am not sure if I had to run installgrub again, but I did anyway:
# installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8t1d0s0

So all looking good now. I have my rpool mirror and am back on the stable 
release version.

I am beginning to like opensolaris.


Re: [zfs-discuss] Adding a zfs mirror drive to rpool - new drive formats to one cylinder less

2010-02-24 Thread tomwaters
Thanks David.

Re. the starting cylinder, it was more that on c8t0d0 the partition started at 
0 and on c8t1d0 it started at 1.

ie.
c8t0d0:
Partition  Status  Type      Start  End    Length  %
=========  ======  ========  =====  =====  ======  ===
1          Active  Solaris2  0      30401  30402   100

c8t1d0:
Partition  Status  Type      Start  End    Length  %
=========  ======  ========  =====  =====  ======  ===
1          Active  Solaris2  1      30400  30400   100

All good now, and yes, I did run installgrub...yet to test booting from c8t1d0 
(with c8t0d0 removed).


[zfs-discuss] Oops, ran zfs destroy after renaming a folder and deleted my file system.

2010-02-24 Thread tomwaters
Ok, I know NOW that I should have used zfs rename...but just for the record, 
and to give you folks a laugh, this is the mistake I made...

I created a zfs file system, cloud/movies, and shared it.
I then filled it with movies and music.
I then decided to rename it, so I used rename in Gnome to change the folder 
name to media...ie. cloud/media.  MISTAKE.
I then noticed the zfs share was pointing to /cloud/movies, which no longer 
exists.
So, I removed cloud/movies with zfs destroy --- BIGGER MISTAKE.

So, now I am restoring my media from backup.
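(For the record, the rename I should have used is a one-liner, and it moves the 
mountpoint and share along with the dataset:

# zfs rename cloud/movies cloud/media
)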


Re: [zfs-discuss] Oops, ran zfs destroy after renaming a folder and deleted my file system.

2010-02-24 Thread tomwaters
Well, both I guess...

I thought the dataset name was based upon the file system...so I was assuming 
that if I renamed the zfs filesystem (with zfs rename) it would also rename the 
dataset...

ie...
#zfs create tank/fred
gives...
NAME        USED   AVAIL  REFER  MOUNTPOINT
tank/fred   26.0K  4.81G  10.0K  /tank/fred

and then
#zfs rename tank/fred tank/mary
will give...
NAME        USED   AVAIL  REFER  MOUNTPOINT
tank/mary   26.0K  4.81G  10.0K  /tank/mary

It's all rather confusing to a newbie like me, I must admit...so please post 
examples so I can understand it.


Re: [zfs-discuss] Adding a zfs mirror drive to rpool - new drive formats to one cylinder less

2010-02-23 Thread tomwaters
Looks like an issue with the start/length of the partition table...

These are the disks from format...
   8. c8t0d0 DEFAULT cyl 30399 alt 2 hd 255 sec 63
  /p...@0,0/pci8086,3...@1f,2/d...@0,0
   9. c8t1d0 DEFAULT cyl 30398 alt 2 hd 255 sec 63
  /p...@0,0/pci8086,3...@1f,2/d...@1,0

Looking at the partitions, the existing rpool disk is formatted like this..

 Total disk size is 30401 cylinders
 Cylinder size is 16065 (512 byte) blocks

   Cylinders
  Partition   StatusType  Start   End   Length%
  =   ==  =   ===   ==   ===
  1   ActiveSolaris2  0  3040130402100


The new disk is formatted like this...
 Total disk size is 30401 cylinders
 Cylinder size is 16065 (512 byte) blocks

   Cylinders
  Partition   StatusType  Start   End   Length%
  =   ==  =   ===   ==   ===
  1   ActiveSolaris2  1  3040030400100


I tried to manually create a partition by selecting 'part' from the menu, but I 
cannot tell it to start at cyl. 0 or go to 30399...

Help!

Why is this so hard!
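(Something I found suggested later, untested on this exact problem: copy the 
label from the good disk to the new one rather than fighting the menus, ie.

# prtvtoc /dev/rdsk/c8t0d0s2 | fmthard -s - /dev/rdsk/c8t1d0s2

Note this copies the Solaris slice table, so it only helps once the fdisk 
partitions themselves line up.)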


Re: [zfs-discuss] Adding a zfs mirror drive to rpool - new drive formats to one cylinder less

2010-02-23 Thread tomwaters
Thanks for that.

It seems strange, though, that the two disks, which are from the same 
manufacturer, same model, same firmware and similar batch/serial numbers, behave 
differently.

I am also puzzled that the rpool disk appears to start at cylinder 0 and not 1.

I did find this quote, after googling for CR 6844090, on the best practices 
page http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

"The size of the replacement vdev, measured by usable sectors, must be the 
same or greater than the vdev being replaced. This can be confusing when whole 
disks are used because different models of disks may provide a different number 
of usable sectors. For example, if a pool was created with a 500 GByte drive 
and you need to replace it with another 500 GByte drive, then you may not be 
able to do so if the drives are not of the same make, model, and firmware 
revision. Consider planning ahead and reserving some space by creating a slice 
which is smaller than the whole disk instead of the whole disk. In Nevada, 
build 117, it might be possible to replace or attach a disk that is slightly 
smaller than the other disks in a pool. This is CR 6844090."