Re: [zfs-discuss] ZFS Project Hardware

2008-05-28 Thread James Andrewartha
Erik Trimble wrote:
 On a related note - does anyone know of a good Solaris-supported 4+ port 
 SATA card for PCI-Express?  Preferably 1x or 4x slots...

 From what I can tell, all the vendors are only making SAS controllers for 
PCIe with more than 4 ports. Since SAS supports SATA, I guess they don't see 
much point in doing SATA-only controllers.

For example, the LSI SAS3081E-R is $260 for 8 SAS ports on 8x PCIe, which is 
somewhat more expensive than the almost equivalent PCI-X LSI SAS3080X-R 
which is as low as $180.

For those downthread looking for full RAID controllers with battery-backed 
RAM, Areca (who formerly specialised in SATA controllers) now do SAS RAID at 
reasonable prices, and have Solaris drivers.

-- 
James Andrewartha
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disabling ZFS ACL

2008-05-28 Thread kevin kramer
that is my thread and I'm still having issues even after applying that patch. 
It just came up again this week.

[localhost] uname -a
Linux dv-121-25.centtech.com 2.6.18-53.1.14.el5 #1 SMP Wed Mar 5 11:37:38 EST 
2008 x86_64 x86_64 x86_64 GNU/Linux
[localhost] cat /etc/issue
CentOS release 5 (Final)
Kernel \r on an \m

[localhost: /n/scr20] touch test
[localhost: /n/scr20] mv test /n/scr01/test/ ** this is a UFS mount on FreeBSD

mv: preserving permissions for `/n/scr01/test/test': Operation not supported
mv: preserving ACL for `/n/scr01/test/test': Operation not supported
mv: preserving permissions for `/n/scr01/test/test': Operation not supported

If I move it to the local /tmp, I get no errors.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disabling ZFS ACL

2008-05-28 Thread Mark Shellenbaum
kevin kramer wrote:
 that is my thread and I'm still having issues even after applying that patch. 
 It just came up again this week.
 
 [localhost] uname -a
 Linux dv-121-25.centtech.com 2.6.18-53.1.14.el5 #1 SMP Wed Mar 5 11:37:38 EST 
 2008 x86_64 x86_64 x86_64 GNU/Linux
 [localhost] cat /etc/issue
 CentOS release 5 (Final)
 Kernel \r on an \m
 
 [localhost: /n/scr20] touch test
 [localhost: /n/scr20] mv test /n/scr01/test/ ** this is a UFS mount on FreeBSD
 
 mv: preserving permissions for `/n/scr01/test/test': Operation not supported
 mv: preserving ACL for `/n/scr01/test/test': Operation not supported
 mv: preserving permissions for `/n/scr01/test/test': Operation not supported
 

That's because the UFS mount doesn't support NFSv4 ACLs and 'mv' was 
unable to translate the ACL into an equivalent POSIX-draft ACL.
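
A possible workaround on the client side, for what it's worth (just a sketch, 
assuming GNU coreutils: plain cp does not try to preserve the source ACL 
unless asked to, so the copy completes without the warnings):

  cp test /n/scr01/test/ && rm test
  # or, being explicit about what not to preserve:
  cp --no-preserve=mode,ownership test /n/scr01/test/ && rm test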

 If I move it to the local /tmp, I get no errors.
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disabling ZFS ACL

2008-05-28 Thread Andy Lubel
Did you try mounting with nfs version 3?

mount -o vers=3
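
For example, on the CentOS client, something along these lines (the server 
name is a placeholder for the Solaris box exporting the filesystem):

  mount -t nfs -o vers=3 yourserver:/n/scr20 /n/scr20

The idea being that over NFSv3 the client never sees the NFSv4 ACL, so mv has 
nothing to try to preserve.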

On May 28, 2008, at 10:38 AM, kevin kramer wrote:

 that is my thread and I'm still having issues even after applying  
 that patch. It just came up again this week.

 [localhost] uname -a
 Linux dv-121-25.centtech.com 2.6.18-53.1.14.el5 #1 SMP Wed Mar 5  
 11:37:38 EST 2008 x86_64 x86_64 x86_64 GNU/Linux
 [localhost] cat /etc/issue
 CentOS release 5 (Final)
 Kernel \r on an \m

 [localhost: /n/scr20] touch test
 [localhost: /n/scr20] mv test /n/scr01/test/ ** this is a UFS mount  
 on FreeBSD

 mv: preserving permissions for `/n/scr01/test/test': Operation not  
 supported
 mv: preserving ACL for `/n/scr01/test/test': Operation not supported
 mv: preserving permissions for `/n/scr01/test/test': Operation not  
 supported

 If I move it to the local /tmp, I get no errors.


 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Problems under vmware

2008-05-28 Thread Gabriele Bulfon
Hello, I'm having exactly the same situation on one VM, but not on another VM on 
the same infrastructure.
The only difference is that on the failing VM I initially created the pool with 
one name and then changed the mountpoint to another name.
Did you find a solution to the issue?
Should I consider going back to UFS on this infrastructure?
Thanks a lot,
Gabriele.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs hangs after disk failure

2008-05-28 Thread J. Les Bemont
At home I have an old Ultra 60 attached to a SCSI shoebox with 6x18GB 
disks.  I created the zpool as RAID-Z with one hot spare.  Recently, one of the 
non-hot-spare disks failed and now zpool commands hang.  Also, I/O to the pool 
just hangs for periods of time.  I'm using release 8/07 with the latest patches 
as of around January.  Is this fixed in a later patch?  What can I do to fix 
this?  I don't have another disk to replace the failed one with.

Thanks.

--Les
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] [blog post] Trying to corrupt data in a ZFS mirror

2008-05-28 Thread Silveira Neto
Hi guys, I wrote my first post about ZFS
(http://silveiraneto.net/2008/05/28/trying-to-corrupt-data-in-a-zfs-mirror/)
showing how to create a pool with a mirror and then trying to corrupt the
data. I used the Self Healing with ZFS demo
(http://opensolaris.org/os/community/zfs/demos/selfheal/) as a base.
I also did a screencast, available on YouTube
(http://youtube.com/watch?v=C44tnu8bus4).
Please take a look, and if you find any mistakes, comment on my blog.
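
A sketch in the same spirit, for anyone who wants to try it without spare
disks, using file-backed vdevs (the pool name, paths and offsets below are
arbitrary; the corruption just has to land past the front vdev labels):

  mkfile 128m /var/tmp/d1 /var/tmp/d2
  zpool create demo mirror /var/tmp/d1 /var/tmp/d2
  dd if=/dev/urandom of=/demo/testfile bs=128k count=256   # put some data in the pool
  dd if=/dev/urandom of=/var/tmp/d1 bs=1024k seek=8 count=16 conv=notrunc   # damage one half
  zpool scrub demo
  zpool status -v demo   # checksum errors show up on d1; the data is repaired from the mirror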

Thanks.

-- 
---
silveiraneto.net
eupodiatamatando.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Project Hardware

2008-05-28 Thread Bill McGonigle
On May 28, 2008, at 05:11, James Andrewartha wrote:

  From what I can tell, all the vendors are only making SAS  
 controllers for
 PCIe with more than 4 ports. Since SAS supports SATA, I guess they  
 don't see
 much point in doing SATA-only controllers.

 For example, the LSI SAS3081E-R is $260 for 8 SAS ports on 8x PCIe,  
 which is
 somewhat more expensive than the almost equivalent PCI-X LSI  
 SAS3080X-R
 which is as low as $180.

That's not a huge price difference when building a server - thanks  
for the pointer.  Are there any 'gotchas' the list can offer when  
using a SAS card with SATA drives?   I've been told that SATA drives  
can have a lower MTBF than SAS drives (by a guy working QA for  
BigDriveCo), but ZFS helps keep the I in RAID.

 For those downthread looking for full RAID controllers with battery  
 backup
 RAM, Areca (who formerly specialised in SATA controllers) now do SAS  
 RAID at
 reasonable prices, and have Solaris drivers.

I've seen posts about misery with the Sil and Marvell drivers from  
about a year ago; is there a good way to pound an OpenSolaris driver  
to find its holes, in a ZFS context?  On one hand I'd guess it  
shouldn't be too hard to simulate different kinds of loads, but on  
the other hand, if that were easy, the drivers' authors would have  
done that before unleashing buggy code on the masses.
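
By way of illustration, one crude way to generate that kind of sustained,
concurrent load against a throwaway pool and then force every block to be
re-read is something like this (pool and file names are placeholders):

  for i in 1 2 3 4; do
    dd if=/dev/urandom of=/testpool/stress.$i bs=128k count=8192 &
  done
  wait
  zpool scrub testpool
  zpool status -v testpool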

Thanks,
-Bill

-
Bill McGonigle, Owner   Work: 603.448.4440
BFC Computing, LLC  Home: 603.448.1668
[EMAIL PROTECTED]   Cell: 603.252.2606
http://www.bfccomputing.com/    Page: 603.442.1833
Blog: http://blog.bfccomputing.com/
VCard: http://bfccomputing.com/vcard/bill.vcf

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs hangs after disk failure

2008-05-28 Thread Richard Elling
J. Les Bemont wrote:
 At home I have an old Ultra 60 attached to a SCSI shoebox with 6x18GB 
 disks.  I created the zpool as RAID-Z with one hot spare.  Recently, one of 
 the non-hot-spare disks failed and now zpool commands hang.  Also, I/O to the 
 pool just hangs for periods of time.  I'm using release 8/07 with the latest 
 patches as of around January.  Is this fixed in a later patch?  What can I do 
 to fix this?  I don't have another disk to replace the failed one with.
   

It is patiently waiting.  Try physically removing the bad disk, which
will fail faster.  More sophisticated fault management has been
implemented in SXCE/OpenSolaris and should arrive in Solaris 10
update 6.
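
The commands for asking the system what it already thinks of the disk are,
roughly:

  zpool status -x    # one-line health summary of all pools
  fmadm faulty       # faults the fault manager has diagnosed
  fmdump -e          # raw error telemetry (retries, timeouts) logged by the drivers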
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs hangs after disk failure

2008-05-28 Thread Tim
On Wed, May 28, 2008 at 11:20 AM, Richard Elling [EMAIL PROTECTED]
wrote:

 J. Les Bemont wrote:
  At home I have an old Ultra 60 attached to a SCSI shoebox with
 6x18GB disks.  I created the zpool as RAID-Z with one hot spare.  Recently,
 one of the non-hot-spare disks failed and now zpool commands hang.  Also,
 I/O to the pool just hangs for periods of time.  I'm using release 8/07 with
 the latest patches as of around January.  Is this fixed in a later patch?  What
 can I do to fix this?  I don't have another disk to replace the failed one
 with.
 

 It is patiently waiting.  Try physically removing the bad disk, which
 will fail faster.  More sophisticated fault management has been
 implemented in SXCE/OpenSolaris and should arrive in Solaris 10
 update 6.
  -- richard

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Is there a way to tell zfs to manually fail it vs. physically removing the
drive?  Having access to physically remove a disk isn't always possible :)

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Project Hardware

2008-05-28 Thread Richard Elling
Bill McGonigle wrote:
 On May 28, 2008, at 05:11, James Andrewartha wrote:

   
  From what I can tell, all the vendors are only making SAS  
 controllers for
 PCIe with more than 4 ports. Since SAS supports SATA, I guess they  
 don't see
 much point in doing SATA-only controllers.

 For example, the LSI SAS3081E-R is $260 for 8 SAS ports on 8x PCIe,  
 which is
 somewhat more expensive than the almost equivalent PCI-X LSI  
 SAS3080X-R
 which is as low as $180.
 

 That's not a huge price difference when building a server - thanks  
 for the pointer.  Are there any 'gotchas' the list can offer when  
 using a SAS card with SATA drives?   I've been told that SATA drives  
 can have a lower MTBF than SAS drives (by a guy working QA for  
 BigDriveCo), but ZFS helps keep the I in RAID.
   

There are BigDriveCos which sell enterprise-class SATA drives.
Since the mechanics are the same, the difference is in the electronics
and software.  Vote with your pocketbook for the enterprise-class
products.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Project Hardware

2008-05-28 Thread Keith Bierman

On May 28, 2008, at 10:27 AM, Richard Elling wrote:

 Since the mechanics are the same, the difference is in the electronics


In my very distant past, I did QA work for an electronic component  
manufacturer. Even parts which were identical were expected to  
behave quite differently ... based on population statistics. That is,  
the HighRel MilSpec parts were from batches with no failures (even  
under very harsh conditions beyond the normal operating mode, and all  
tests to destruction showed only the expected failure modes) and the  
hobbyist grade components were those whose cohort *failed* all the  
testing (and destructive testing could highlight abnormal failure  
modes).

I don't know that drive builders do the same thing, but I'd kinda  
expect it.

-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs hangs after disk failure

2008-05-28 Thread Richard Elling
Tim wrote:


 On Wed, May 28, 2008 at 11:20 AM, Richard Elling 
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:

 J. Les Bemont wrote:
  At home I have an old Ultra 60 attached to a SCSI
 shoebox with 6x18GB disks.  I created the zpool as RAID-Z with one
 hot spare.  Recently, one of the non-hot-spare disks failed and
 now zpool commands hang.  Also, I/O to the pool just hangs for
 periods of time.  I'm using release 8/07 with the latest patches as of
 around January.  Is this fixed in a later patch?  What can I do to
 fix this?  I don't have another disk to replace the failed one with.
 

 It is patiently waiting.  Try physically removing the bad disk, which
 will fail faster.  More sophisticated fault management has been
 implemented in SXCE/OpenSolaris and should arrive in Solaris 10
 update 6.
  -- richard

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org mailto:zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




 Is there a way to tell zfs to manually fail it vs. physically removing 
 the drive?  Having access to physically remove a disk isn't always 
 possible :)

zpool offline
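
A minimal illustration (pool and device names are placeholders):

  zpool offline tank c1t2d0    # take the suspect disk out of service; the pool runs degraded
  zpool status tank
  zpool replace tank c1t2d0    # later, after a new disk has been swapped in at that location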
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-28 Thread Mertol Ozyoney
I strongly agree with most of the comments. I guess I tried to keep it simple,
perhaps a little bit too simple.

If I am not mistaken, most NAND disks virtualize the underlying cells, so even
if you update the same sector, the update will be made somewhere else.

So the time to wear out an enterprise-grade SSD (NAND-based) will be quite
long, although I wouldn't recommend keeping the swap file or any sort of
fast-changing cache on those drives.

Say you have a 146 GB SSD with a write-cycle rating of around 100k, and you can
write/update data at 10 MB/sec (depending on the I/O pattern it could be a lot
slower or a lot faster). It will take roughly 14,600 seconds, or about 4 hours,
to fully overwrite the drive once. Multiply this by 100k and you get about 46
years. If the wear-levelling (virtualisation) algorithms work at only 25%
efficiency, that is still more than 10 years.
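
Spelling the arithmetic out, using the numbers above (146 GB, 10 MB/s
sustained writes, 100k write cycles):

  146 GB / 10 MB/s           ~= 14,600 s   (about 4 hours per full overwrite)
  14,600 s x 100,000 cycles  ~= 1.46e9 s   (about 46 years)
  at 25% wear-levelling efficiency         (about 11-12 years)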

And if I am not mistaken, all enterprise NAND devices and most consumer NAND
devices do read-after-write verification and will mark bad blocks. This also
increases the usable lifetime, as you will not be marking a whole device
failed, just a cell...

Please correct me where I am wrong, as I am not very knowledgeable on this
subject.

Mertol 






Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email [EMAIL PROTECTED]



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Bob Friesenhahn
Sent: Tuesday, 27 May 2008 18:55
To: Mertol Ozyoney
Cc: 'ZFS Discuss'
Subject: Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

On Mon, 26 May 2008, Mertol Ozyoney wrote:

 It's true that NAND-based flash wears out under heavy load. Regular
 consumer-grade NAND drives will wear out the extra cells pretty rapidly (in
 a year or so). However, enterprise-grade SSD disks are fine-tuned to
 withstand continuous writes for more than 10 years.

It is incorrect to classify wear in terms of years without also 
specifying update behavior.  NAND FLASH sectors can withstand 100,000 
to (sometimes) 1,000,000 write-erase-cycles.  In normal filesystem 
use, there are far more reads than writes and the size of the storage 
device is much larger than the data re-written.  Even in server 
use, only a small fraction of the data is updated.  A device used to 
cache writes will be written to as often as it is read from (or 
perhaps more often).  If the cache device storage is fully occupied, 
then wear leveling algorithms based on statistics do not have much 
opportunity to work.

If the underlying device sectors are good for 100,000 
write-erase-cycles and the entire device is re-written once per 
second, then the device is not going to last very long (27 hours). 
Of course the write performance for these devices is quite poor 
(8-120MB/second) and the write performance seems to be proportional to 
the total storage size so it is quite unlikely that you could re-write 
a suitably performant device once per second.  The performance of 
FLASH SSDs does not seem very appropriate for use as a write cache 
device.

There is a useful guide to these devices at 
http://www.storagesearch.com/ssd-buyers-guide.html.

SRAM-based cache devices which plug into a PCI-X or PCI-Express slot 
seem far more appropriate for use as a write cache than a slow SATA 
device.  At least 5X or 10X the performance is available by this 
means.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-28 Thread Mertol Ozyoney
By the way, all enterprise SSDs have an internal DRAM-based cache, and some
vendors may quote the write performance of that internal RAM device.
Normally, NAND drives will not perform very well under write-heavy load, due
to read-after-write operations and several other reasons.


Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email [EMAIL PROTECTED]



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Bob Friesenhahn
Sent: Tuesday, 27 May 2008 20:22
To: Tim
Cc: ZFS Discuss
Subject: Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

On Tue, 27 May 2008, Tim wrote:

 You're still concentrating on consumer-level drives.  The STEC drives
 EMC is using, for instance, exhibit none of the behaviors you describe.

How long have you been working for STEC? ;-)

Looking at the specifications for STEC SSDs I see that they are very 
good at IOPS (probably many times faster than the Solaris I/O stack). 
Write performance of the fastest product (ZEUS iops) is similar to a 
typical SAS hard drive, with the remaining products being much slower.

This is all that STEC has to say about FLASH lifetime in their products:
http://www.stec-inc.com/technology/flash_life_support.php.  There 
are no hard facts to be found there.

The STEC SSDs are targeted towards being a replacement for a 
traditional hard drive.  There is no mention of lifetime when used as 
a write-intensive cache device.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-28 Thread Bob Friesenhahn
On Wed, 28 May 2008, Mertol Ozyoney wrote:

 Say you have a 146 GB SSD with a write-cycle rating of around 100k, and you
 can write/update data at 10 MB/sec (depending on the I/O pattern it could be
 a lot slower or a lot faster). It will take roughly 14,600 seconds, or about
 4 hours, to fully overwrite the drive once. Multiply this by 100k and you get
 about 46 years. If the wear-levelling (virtualisation) algorithms work at
 only 25% efficiency, that is still more than 10 years.

 Please correct me where I am wrong, as I am not very knowledgeable on this
 subject.

It seems that we are in agreement that expected lifetime depends on 
the usage model.  Lifetime will be vastly longer if the drive is used 
as a normal filesystem disk as compared to being used as a RAID write 
cache device.  I have not heard of any RAID arrays which use FLASH for 
their write cache.  They all use battery backed SRAM (or similar).

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Project Hardware

2008-05-28 Thread James Andrewartha
On Wed, 2008-05-28 at 10:34 -0600, Keith Bierman wrote:
 On May 28, 2008, at 10:27 AM, Richard Elling wrote:
 
  Since the mechanics are the same, the difference is in the electronics
 
 
 In my very distant past, I did QA work for an electronic component  
 manufacturer. Even parts which were identical were expected to  
 behave quite differently ... based on population statistics. That is,  
 the HighRel MilSpec parts were from batches with no failures (even  
 under very harsh conditions beyond the normal operating mode, and all  
 tests to destruction showed only the expected failure modes) and the  
 hobbyist grade components were those whose cohort *failed* all the  
 testing (and destructive testing could highlight abnormal failure  
 modes).
 
 I don't know that drive builders do the same thing, but I'd kinda  
 expect it.

Seagate's ES.2 has a higher MTBF than the equivalent consumer drive, so
you're probably right. Western Digital's RE2 series (which my work uses)
comes with a 5 year warranty, compared to 3 years for the consumer
versions. The RE2 also have firmware with Time-Limited Error Recovery,
which reports errors promptly, letting the higher-level RAID do data
recovery. Both have improved vibration tolerance through firmware
tweaks. And if you want 10krpm, I think WD's VelociRaptor counts.
http://www.techreport.com/articles.x/13732
http://www.techreport.com/articles.x/13253
http://www.techreport.com/articles.x/14583
http://www.storagereview.com/ is promising some SSD benchmarks soon.

James Andrewartha
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS sharing options for Windows

2008-05-28 Thread Craig Smith
Hello, I am fairly new to Solaris and ZFS. I am testing both out in a sandbox 
at work. I am playing with virtual machines running on a Windows front-end that 
connects to a ZFS back-end for its data needs. As far as I know, my two options 
are sharesmb and shareiscsi for data sharing. I have a couple of questions about 
which way I should go. Is there a performance increase from using iSCSI? If I go 
with iSCSI, will I then have to format the Windows iSCSI disk as NTFS? I want 
the ability to create snapshots of the virtual disks; if I have to format the 
iSCSI target with NTFS on the Windows machine, ZFS would not see the individual 
files, correct? Again, this is new territory for me and I have been doing a lot 
of reading. Thanks in advance for any input.

P.S. I know this is not an ideal situation for VMs or storage, but it is what I 
have been given to work with.
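
As I understand it so far, the two options look roughly like this on the ZFS
side (the dataset names are made up, and sharesmb additionally needs the CIFS
service configured):

  # SMB: share a filesystem; ZFS sees individual files
  zfs create tank/vmdata
  zfs set sharesmb=on tank/vmdata

  # iSCSI: export a zvol; Windows formats it NTFS, so to ZFS it is one opaque volume
  zfs create -V 100g tank/vmvol
  zfs set shareiscsi=on tank/vmvol

  # snapshots work either way, but on the zvol they capture the whole NTFS image
  zfs snapshot tank/vmvol@before-upgrade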
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Project Hardware

2008-05-28 Thread Brandon High
On Wed, May 28, 2008 at 9:27 AM, Richard Elling [EMAIL PROTECTED] wrote:
 There are BigDriveCos which sell enterprise-class SATA drives.
 Since the mechanics are the same, the difference is in the electronics
 and software.  Vote with your pocketbook for the enterprise-class
 products.

CMU released a study comparing the MTBF of enterprise-class drives with
that of consumer drives, and found no real differences.

From the study:
In our data sets, the replacement rates of SATA disks are not worse
than the replacement rates of SCSI or FC disks. This may indicate that
disk-independent factors, such as operating conditions, usage and
environmental factors affect replacement rates more than component
specific factors.

Google has also released a similar study on drive reliability.
Google's sample size is considerably larger than CMU's as well.
There's a blurb here: http://news.bbc.co.uk/2/hi/technology/6376021.stm
Full results here: http://research.google.com/archive/disk_failures.pdf

-B

-- 
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Project Hardware

2008-05-28 Thread Bob Friesenhahn
On Wed, 28 May 2008, Brandon High wrote:
 CMU released a study comparing the MTBF of enterprise-class drives with
 that of consumer drives, and found no real differences.

That should really not be a surprise.  Chips are chips and, given the 
economies of scale, as few chips will be used as possible.  The 
quality of manufacture could vary, but this is likely more dependent 
on the manufacturer than the product line.  Manufacturers who produce 
crummy products don't last very long.

True enterprise drives (SCSI, SAS, FC) have much lower media read 
error rates (by a factor of 10) and more tolerance to vibration and 
temperature.  They also have much lower storage capacity and much 
better seek and I/O performance.  Failure to read a block is not a 
failure of the drive so this won't be considered by any study which 
only considers drive replacement.

SATA enterprise drives seem more like a gimmick than anything else. 
Perhaps the warranty is longer and they include a tiny bit more smarts 
in the firmware.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Project Hardware

2008-05-28 Thread Richard Elling
http://blogs.sun.com/relling/entry/adaptec_webinar_on_disks_and
 -- richard

Bob Friesenhahn wrote:
 On Wed, 28 May 2008, Brandon High wrote:
   
 CMU released a study comparing the MTBF of enterprise-class drives with
 that of consumer drives, and found no real differences.
 

 That should really not be a surprise.  Chips are chips and, given the 
 economies of scale, as few chips will be used as possible.  The 
 quality of manufacture could vary, but this is likely more dependent 
 on the manufacturer than the product line.  Manufacturers who produce 
 crummy products don't last very long.

 True enterprise drives (SCSI, SAS, FC) have much lower media read 
 error rates (by a factor of 10) and more tolerance to vibration and 
 temperature.  They also have much lower storage capacity and much 
 better seek and I/O performance.  Failure to read a block is not a 
 failure of the drive so this won't be considered by any study which 
 only considers drive replacement.

 SATA enterprise drives seem more like a gimmick than anything else. 
 Perhaps the warranty is longer and they include a tiny bit more smarts 
 in the firmware.

   
 Bob
 ==
 Bob Friesenhahn
 [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Create ZFS now, add mirror later

2008-05-28 Thread E. Mike Durbin
Is there a way to create a zfs file system
(e.g. zpool create boot /dev/dsk/c0t0d0s1)

Then, (after vacating the old boot disk) add another
device and make the zpool a mirror?

(as in: zpool create boot mirror /dev/dsk/c0t0d0s1 /dev/dsk/c1t0d0s1)

Thanks!

emike
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Create ZFS now, add mirror later

2008-05-28 Thread Richard Elling
E. Mike Durbin wrote:
 Is there a way to create a zfs file system
 (e.g. zpool create boot /dev/dsk/c0t0d0s1)

 Then, (after vacating the old boot disk) add another
 device and make the zpool a mirror?
   

zpool attach
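
Using the names from the question, that would be something like:

  zpool create boot c0t0d0s1
  # ... later, once the old boot disk has been vacated ...
  zpool attach boot c0t0d0s1 c1t0d0s1
  zpool status boot    # wait for the resilver; the single disk is now a two-way mirror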
 -- richard

 (as in: zpool create boot mirror /dev/dsk/c0t0d0s1 /dev/dsk/c1t0d0s1)

 Thanks!

 emike
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS locking up! Bug in ZFS, device drivers or time for mobo RMA?

2008-05-28 Thread Muti Zen
Greetings all.

I am facing serious problems running ZFS on a storage server assembled out of 
commodity hardware that is supposed to be Solaris compatible.

Although I am quite familiar with Linux distros and other unices, I am new to 
Solaris, so any suggestions are highly appreciated.



First I tried SXDE 1/08 creating the following pool:

-bash-3.2# zpool status -v tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  raidz1ONLINE   0 0 0
c5t0d0  ONLINE   0 0 0
c5t1d0  ONLINE   0 0 0
c6t1d0  ONLINE   0 0 0
c6t0d0  ONLINE   0 0 0
c7t0d0  ONLINE   0 0 0

errors: No known data errors

All went well until I tried pulling files from the server to another machine 
running 64-bit Vista Ultimate SP1 via its built-in NFS client. After copying 
approx. 100 of them (split archives, each 100MB in size, i.e. approx. 10GB of 
data) I always get a "The semaphore timeout period has expired" error. The 
machines are currently connected by a 1Gbps switch, but I have tried several 
other devices as well (some supporting only 100Mbps).


When this happens, Solaris is still responsive, but any zpool command I try 
locks up. For example, zpool status tank prints just the following

  pool: tank
 state: ONLINE
 scrub: none requested

and then lock up.


This gives me the impression that after several minutes of usage, the ZFS 
subsystem on the machine locks up and anything that tries to touch it locks up 
as well.

The only way I have found to make the server run again is a hardware reset. 
Software reboot/shutdown locks up as well.


Another possibly related problem I had was that, instead of or in addition to 
this lock-up, ZFS degraded my pool, marking one of the discs as faulty.

It was always the same one, regardless of the port it was plugged into.

The weird thing, though, is that the disc appears to be perfectly functional. 
Running the thorough Samsung ESTOOL diagnostics on it many times found no 
problems. Clearing the errors and scrubbing the pool would make it operational 
again, at least for a while.
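
The clear-and-scrub sequence in question is roughly (the pool name is the one
from the zpool status output above):

  zpool clear tank       # reset the error counters and FAULTED state
  zpool scrub tank       # re-read and verify every allocated block
  zpool status -v tank   # check the outcome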


I have replaced SXDE with OpenSolaris 2008.05, but it didn't seem to affect 
these problems at all.

I bought more discs, hoping that replacing the failing one would solve the 
problems. Unfortunately it did not solve all of them. The array doesn't degrade 
due to a failing disc anymore, but ZFS still seems to lock up after several 
minutes of usage.



Thanks in advance for any suggestions on how best to approach these problems.



Server HW:

Mobo:   MSI K9N Diamond
CPU:Athlon 64 X2 5200+
Mem:Corsair TWIN2x4096-6400C4DHX
PSU:Corsair HX620W
Case:   ThermalTake Armor+
GFX:MSI N9600GT-T2D1G-OC
HDDs:   Spinpoint F1 HD103UJ (1TB, 32MB, 7200rpm, SATA2)

All HDDs are the same model.
The machine is not overclocked.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Create ZFS now, add mirror later

2008-05-28 Thread W. Wayne Liauh
 E. Mike Durbin wrote:
  Is there a way to create a zfs file system
  (e.g. zpool create boot /dev/dsk/c0t0d0s1)
 
  Then, (after vacating the old boot disk) add
 another
  device and make the zpool a mirror?

 
 zpool attach
  -- richard
  (as in: zpool create boot mirror /dev/dsk/c0t0d0s1
 /dev/dsk/c1t0d0s1)
 
  Thanks!
 
  emike
   
   

Thanks, but for starters, where is the best place to find info like this (i.e., 
the easiest way to get started with ZFS)?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss