[zfs-discuss] nfs issues

2010-10-08 Thread Thomas Burgess
I'm having some very strange nfs issues that are driving me somewhat mad.

I'm running b134 and have been for months now, without issue. Recently I
enabled two services to get Bonjour notifications working in OS X:

/network/dns/multicast:default
/system/avahi-bridge-dsd:default


and i added a few .service files to /etc/avahi/services/

Ever since doing this, NFS keeps crashing (I'd say every 30 minutes or
so) and falling into the maintenance state. On top of that, I see a TON
of core files in /:


bin  boot  cdrom  dev  devices  etc  export  home  kernel  lib
lost+found  media  mnt  net  opt  platform  proc  root  rpool
sbin  tank  tmp  usr

core.mountd.1286564233  core.mountd.1286564239  core.mountd.1286574167
core.mountd.1286574170  core.mountd.1286574173  core.mountd.1286576077
core.mountd.1286576084  core.mountd.1286579150  core.mountd.1286579153
core.mountd.1286579221  core.mountd.1286579228  core.mountd.1286583275
core.mountd.1286583278  core.mountd.1286583281  core.mountd.1286583284
core.mountd.1286583355  core.mountd.1286583418  core.mountd.1286583420
core.mountd.1286583423  core.mountd.1286586406  core.mountd.1286586409
core.mountd.1286586498  core.mountd.1286586501


Running pstack on one of them shows:




wonsl...@wonslung-raidz2:/tank/nas/dump/Done# pstack
/core.mountd.1286564233
core '/core.mountd.1286564233' of 22940: /usr/lib/nfs/mountd
-----------------  lwp# 1 / thread# 1  -----------------
 feeefd1b __lwp_park (fee12a00, 0, fe5f9588, 0) + b
 feee7beb mutex_lock_impl (fe5f9588, 0, 8047d58, 80e6f50, 80e7030, fe5f3000)
+ 163
 feee7d28 mutex_lock (fe5f9588) + 10
 fe5c3cf5 _svc_run_mt (fe5f4838, fe5f4848, fe5f4858, fe5f4858, 805552d,
feeca118) + 69
 fe5c38eb svc_run  (1, 2328, 1, 0, 8047dfc, feffb804) + 77
 0805552d main (1, 8047e40, 8047e48, 8047dfc) + 4f9
 080548ed _start   (1, 8047ee0, 0, 8047ef4, 8047f11, 8047f22) + 7d
-----------------  lwp# 2 / thread# 2  -----------------
 feef4367 __pause  (8, 200, 8, 5, fe000, fef82000) + 7
 08054de7 nfsauth_svc (0, fef82000, fed4efe8, feeef9fe) + 3b
 feeefa53 _thrp_setup (fedf0a00) + 9b
 feeefce0 _lwp_start (fedf0a00, 0, 0, 0, 0, 0)
-----------------  lwp# 3 / thread# 3  -----------------
 feef4367 __pause  (8, 200, 8, 6, fe000, fef82000) + 7
 08054e43 cmd_svc  (0, fef82000, fe46efe8, feeef9fe) + 3b
 feeefa53 _thrp_setup (fedf1200) + 9b
 feeefce0 _lwp_start (fedf1200, 0, 0, 0, 0, 0)
-----------------  lwp# 4 / thread# 4  -----------------
 08054f49 do_logging_queue (8071110, 806e888, fe36ffc8, 805501a) + 45
 0805502e logging_svc (0, fef82000, fe36ffe8, feeef9fe) + 52
 feeefa53 _thrp_setup (fedf1a00) + 9b
 feeefce0 _lwp_start (fedf1a00, 0, 0, 0, 0, 0)
-----------------  lwp# 5 / thread# 5  -----------------
 feef4ca1 __door_return (fe270d2c, 8, 0, 0) + 21
 08059280 nfsauth_func (0, fe270dc4, 3c, 0, 0, 8059108) + 178
 feef4cbe __door_return () + 3e
-----------------  lwp# 6 / thread# 6  -----------------
 feef4ca1 __door_return (0, 0, 0, 0) + 21
 feedb63f door_create_func (0, fef82000, fe171fe8, feeef9fe) + 2f
 feeefa53 _thrp_setup (fedf2a00) + 9b
 feeefce0 _lwp_start (fedf2a00, 0, 0, 0, 0, 0)
-----------------  lwp# 8 / thread# 8  -----------------
 feef4387 __pollsys (80ca168, 9, 0, 0, fe5f8e38, fef82000) + 7
 fee987f4 poll (80ca168, 9, , fe5c3fc7) + 4c
 fe5c3e49 _svc_run_mt (0, fef82000, fdf73fe8, feeef9fe) + 1bd
 feeefa53 _thrp_setup (fedf3a00) + 9b
 feeefce0 _lwp_start (fedf3a00, 0, 0, 0, 0, 0)



Anyways, I am not an expert and don't really know how to troubleshoot this,
so if someone could help, I'd really appreciate it.
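For anyone else hitting this, here is a minimal, generic SMF triage sketch. It is not from the thread: the service FMRI and core path are assumptions, and the commands are printed rather than executed so the sequence can be reviewed before being run on a real system.

```shell
# Generic SMF triage for a service stuck in maintenance; commands are
# echoed so they can be reviewed before being run on a real Solaris box.
triage() {
    echo "svcs -xv"                                      # explain which services are down and why
    echo "svcadm clear svc:/network/nfs/server:default"  # retry the service after fixing the cause
    echo "coreadm -g /var/cores/core.%f.%t -e global"    # keep future core dumps out of /
    echo "pstack /core.mountd.1286564233"                # inspect one of the saved cores
}
triage
```

The `coreadm` line is just housekeeping: it stops mountd's cores from piling up in the root directory while the real cause is being chased.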
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] dedup and handling corruptions - impossible?

2010-08-22 Thread Thomas Burgess
You are saying ZFS will detect and rectify this kind of corruption in a
 deduped pool automatically if enough redundancy is present? Can that fail
 sometimes? Under what conditions?

 I would hate to restore a 1.5TB pool from backup just because one 5MB file
 is gone bust. And I have a known good copy of the file.

 I raised a technical question and you are going all personal on me.
 --
 This message posted from opensolaris.org


ZFS checksums every block. When you access a file, it checks that the
checksums match. If they do not (corruption) and you have redundancy, it
repairs the corruption. It can detect and correct corruption in this way.
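To force that check across the whole pool instead of waiting for each block to be read, you can scrub it. A minimal sketch, assuming a pool named "tank"; the commands are echoed for review rather than run:

```shell
# Ask ZFS to read and verify every block now, repairing from redundancy
# where possible; "status -v" then lists any files it could not recover.
selfheal_check() {
    echo "zpool scrub tank"
    echo "zpool status -v tank"
}
selfheal_check
```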


It didn't seem like anyone got personal with you.


Re: [zfs-discuss] Upgrade Nevada Kernel

2010-08-21 Thread Thomas Burgess
You can upgrade by switching to the dev repository, or if you don't mind
reinstalling you can download the b134 image at genunix:

http://www.genunix.org/
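The repository switch mentioned above looks roughly like this. The publisher name and dev URL below are the standard OpenSolaris ones of that era, so verify them before use; the commands are echoed rather than executed:

```shell
# Point the image at the dev repository, then update; the update lands
# in a new boot environment you reboot into.
upgrade_sketch() {
    echo "pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org"
    echo "pkg image-update"
    echo "beadm list"   # confirm the new boot environment before rebooting
}
upgrade_sketch
```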

On Sat, Aug 21, 2010 at 1:25 AM, Long Tran opensolaris.stor...@gmail.com wrote:

 Hi,
 I hit a ZFS bug that will be resolved in snv 134 or later.
 I'm running snv 111.
 How do I upgrade to the latest version?

 Thanks




Re: [zfs-discuss] Halcyon ZFS and system monitoring software for OpenSolaris (beta)

2010-08-20 Thread Thomas Burgess
On Thu, Aug 19, 2010 at 4:33 PM, Mike Kirk mike.k...@halcyoninc.com wrote:

 Hi all,

 Halcyon recently started to add ZFS pool stats to our Solaris Agent, and
 because many people were interested in the previous OpenSolaris beta* we've
 rolled it into our OpenSolaris build as well.

 I've already heard some great feedback about supporting ZIL and ARC stats,
 which we're hoping to add soon. If you'd like to see what we have now, and
 maybe try it on your OpenSolaris system, please see the download/screenshot
 page here:

 http://forums.halcyoninc.com/showthread.php?p=1018

 I know this isn't the best time to be posting about legacy OpenSolaris:
 we're keeping our eyes on Solaris 11 Express / Illumos and aim to support
 the more advanced features of Solaris 11 the day it's pushed out the door.

 Thanks for your time!

 Regards,

 Mike dot Kirk at HalcyonInc dot com


I just tried this, and I'm getting an error on install. I've also posted in
your forums, but I thought perhaps someone else on the list might know the
solution.

Anyways, I'm running OpenSolaris b134; this is the error I receive:

Seeding the new agent ...

ERROR:  Failed to run command /opt/Neuron/bin/na usm-seed -s xxx
agent. STDOUT/STDERR: /opt/Neuron/bin/na[1009]: eval: line 1: 6470: Memory
fault(coredump)

Moving log file /tmp/HALNeuronSolaris-install_20100820-29.log to
/var/opt/Neuron/install/HALNeuronSolaris-install_20100820-29.log ...




Any help would be greatly appreciated; I really love the screenshots for
this software.


Re: [zfs-discuss] make df have accurate out upon zfs?

2010-08-20 Thread Thomas Burgess
df serves a purpose though.

There are other commands which output that information..

On Thu, Aug 19, 2010 at 3:01 PM, Fred Liu fred_...@issi.com wrote:

 Not sure if there was similar threads in this list before.
 Three scenarios:
 1): df cannot count snapshot space in a file system with quota set.
 2): df cannot count sub-filesystem space in a file system with quota set.
 3): df cannot count space saved by de-dup in a file system with quota set.

 Are they possible?

 Btw, what is the difference between  /usr/gnu/bin/df and /bin/df?

 Thanks.

 Fred


Re: [zfs-discuss] make df have accurate out upon zfs?

2010-08-20 Thread Thomas Burgess
can't the zfs command provide that information?


2010/8/20 Fred Liu fred_...@issi.com

  Can you shed more lights on **other commands** which output that
 information?

 Appreciations.



 Fred



 *From:* Thomas Burgess [mailto:wonsl...@gmail.com]
 *Sent:* 星期五, 八月 20, 2010 17:34
 *To:* Fred Liu
 *Cc:* ZFS Discuss
 *Subject:* Re: [zfs-discuss] make df have accurate out upon zfs?



 df serves a purpose though.



 There are other commands which output that information..

 On Thu, Aug 19, 2010 at 3:01 PM, Fred Liu fred_...@issi.com wrote:

 Not sure if there was similar threads in this list before.
 Three scenarios:
 1): df cannot count snapshot space in a file system with quota set.
 2): df cannot count sub-filesystem space in a file system with quota set.
 3): df cannot count space saved by de-dup in a file system with quota set.

 Are they possible?

 Btw, what is the difference between  /usr/gnu/bin/df and /bin/df?

 Thanks.

 Fred


Re: [zfs-discuss] make df have accurate out upon zfs?

2010-08-20 Thread Thomas Burgess
As for the difference between the two df's, one is the GNU df (like you'd
have on Linux) and the other is the Solaris df.



2010/8/20 Thomas Burgess wonsl...@gmail.com

 can't the zfs command provide that information?


 2010/8/20 Fred Liu fred_...@issi.com

  Can you shed more lights on **other commands** which output that
 information?

 Appreciations.



 Fred



 *From:* Thomas Burgess [mailto:wonsl...@gmail.com]
 *Sent:* 星期五, 八月 20, 2010 17:34
 *To:* Fred Liu
 *Cc:* ZFS Discuss
 *Subject:* Re: [zfs-discuss] make df have accurate out upon zfs?



 df serves a purpose though.



 There are other commands which output that information..

 On Thu, Aug 19, 2010 at 3:01 PM, Fred Liu fred_...@issi.com wrote:

 Not sure if there was similar threads in this list before.
 Three scenarios:
 1): df cannot count snapshot space in a file system with quota set.
 2): df cannot count sub-filesystem space in a file system with quota set.
 3): df cannot count space saved by de-dup in a file system with quota set.

 Are they possible?

 Btw, what is the difference between  /usr/gnu/bin/df and /bin/df?

 Thanks.

 Fred


Re: [zfs-discuss] make df have accurate out upon zfs?

2010-08-20 Thread Thomas Burgess
Try something like:

zfs list -o space

zfs list -t snapshot

stuff like that.

2010/8/20 Fred Liu fred_...@issi.com

  Sure, I know this.

 What I want to say is following:

 r...@cn03:~# /usr/gnu/bin/df -h /cn03/3

 FilesystemSize  Used Avail Use% Mounted on

 cn03/3298G  154K  298G   1% /cn03/3

 r...@cn03:~# /bin/df -h /cn03/3

 Filesystem size   used  avail capacity  Mounted on

 cn03/3 800G   154K   297G 1%/cn03/3



 r...@cn03:~# zfs get all cn03/3

 NAMEPROPERTY   VALUE  SOURCE

 cn03/3  type   filesystem -

 cn03/3  creation   Sat Jul 10  9:35 2010  -

 cn03/3  used   503G   -

 cn03/3  available  297G   -

 cn03/3  referenced 154K   -

 cn03/3  compressratio  1.00x  -

 cn03/3  mountedyes-

 cn03/3  quota  800G   local

 cn03/3  reservationnone   default

 cn03/3  recordsize 128K   default

 cn03/3  mountpoint /cn03/3default

 cn03/3  sharenfs   rw,root=nfsrootlocal

 cn03/3  checksum   on default

 cn03/3  compressionoffdefault

 cn03/3  atime  on default

 cn03/3  deviceson default

 cn03/3  exec   on default

 cn03/3  setuid on default

 cn03/3  readonly   offdefault

 cn03/3  zoned  offdefault

 cn03/3  snapdirhidden default

 cn03/3  aclmodegroupmask  default

 cn03/3  aclinherit restricted default

 cn03/3  canmount   on default

 cn03/3  shareiscsi offdefault

 cn03/3  xattr  on default

 cn03/3  copies 1  default

 cn03/3  version4  -

 cn03/3  utf8only   off-

 cn03/3  normalization  none   -

 cn03/3  casesensitivitysensitive  -

 cn03/3  vscan  offdefault

 cn03/3  nbmand offdefault

 cn03/3  sharesmb   offdefault

 cn03/3  refquota   none   default

 cn03/3  refreservation none   default

 cn03/3  primarycache   alldefault

 cn03/3  secondarycache alldefault

 cn03/3  usedbysnapshots46.8G  -

 cn03/3  usedbydataset  154K   -

 cn03/3  usedbychildren 456G   -

 cn03/3  usedbyrefreservation   0  -

 cn03/3  logbiaslatencydefault

 cn03/3  dedup  offdefault

 cn03/3  mlslabel   none   default

 cn03/3  com.sun:auto-snapshot  true   inherited from cn03



 Thanks.



 Fred



 *From:* Thomas Burgess [mailto:wonsl...@gmail.com]
 *Sent:* 星期五, 八月 20, 2010 18:44

 *To:* Fred Liu
 *Cc:* ZFS Discuss
 *Subject:* Re: [zfs-discuss] make df have accurate out upon zfs?



 as for the difference between the two df's, one is the GNU df (like you'd
 have on Linux) and the other is the Solaris df.





 2010/8/20 Thomas Burgess wonsl...@gmail.com

 can't the zfs command provide that information?



 2010/8/20 Fred Liu fred_...@issi.com



 Can you shed more lights on **other commands** which output that
 information?

 Appreciations.



 Fred



 *From:* Thomas Burgess [mailto:wonsl...@gmail.com]
 *Sent:* 星期五, 八月 20, 2010 17:34
 *To:* Fred Liu
 *Cc:* ZFS Discuss
 *Subject:* Re: [zfs-discuss] make df have accurate out upon zfs?



 df serves a purpose though.



 There are other commands which output that information..

 On Thu, Aug 19, 2010 at 3:01 PM, Fred Liu fred_...@issi.com wrote:

 Not sure if there was similar threads in this list before.
 Three scenarios:
 1): df cannot count snapshot space in a file system with quota set.
 2): df cannot count sub-filesystem space in a file system with quota set.
 3): df cannot count space saved by de-dup in a file system with quota set.

 Are they possible?

 Btw, what is the difference between  /usr/gnu/bin/df and /bin/df?

 Thanks.

 Fred

[zfs-discuss] lots of errors in logs?

2010-08-20 Thread Thomas Burgess
I've been running OpenSolaris for months, and today while poking around, I
noticed a ton of errors in my logs. I'm wondering what they mean and if
it's anything to worry about.


I've found a few things on Google but not a whole lot. Anyways, here's a
pastie of the log:

http://pastie.org/1104916


any help would be greatly appreciated


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Thomas Burgess
On Mon, Aug 16, 2010 at 11:17 PM, Frank Cusack
frank+lists/z...@linetwo.net wrote:

 On 8/16/10 9:57 AM -0400 Ross Walker wrote:

 No, the only real issue is the license and I highly doubt Oracle will
 re-release ZFS under GPL to dilute it's competitive advantage.


 You're saying Oracle wants to keep zfs out of Linux?




Why would Oracle want ZFS in Linux when it makes the value of Solaris
greater?


Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Thomas Burgess
On Wed, Aug 11, 2010 at 12:57 AM, Peter Taps ptr...@yahoo.com wrote:

 Hi Eric,

 Thank you for your help. At least one part is clear now.

 I still am confused about how the system is still functional after one disk
 fails.

 Consider my earlier example of 3 disks zpool configured for raidz-1. To
 keep it simple let's not consider block sizes.

 Let's say I send a write value abcdef to the zpool.

 As the data gets striped, we will have 2 characters per disk.

 disk1 = ab + some parity info
 disk2 = cd + some parity info
 disk3 = ef + some parity info

 Now, if disk2 fails, I lost cd. How will I ever recover this? The parity
 info may tell me that something is bad but I don't see how my data will get
 recovered.

 The only good thing is that any newer data will now be striped over two
 disks.

 Perhaps I am missing some fundamental concept about raidz.

 Regards,
 Peter






I find the best way to understand how parity works is to think back to your
algebra class, when you'd have something like

1x + 2 = 3

and you could solve for x. It's not EXACTLY like that, but solving the
parity stuff is similar to solving for x.
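To make the analogy concrete: single-parity schemes store an XOR of the data, so any one missing piece can be re-derived from the survivors. The byte values below are arbitrary examples, not how raidz actually lays out a stripe:

```shell
# parity = d1 XOR d2; losing "disk2" is recoverable because
# parity XOR d1 == d2 (XOR is its own inverse).
d1=$((0xAB))          # data on disk1
d2=$((0xCD))          # data on disk2 (about to be "lost")
p=$(( d1 ^ d2 ))      # parity written to disk3
lost=$(( p ^ d1 ))    # reconstruct disk2 from parity + disk1
echo "$lost"
```

Running this prints 205 (0xCD), i.e. the "lost" disk2 byte, which is the whole trick behind surviving a single-disk failure.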









Re: [zfs-discuss] Best usage of SSD-disk in ZFS system

2010-08-06 Thread Thomas Burgess
On Fri, Aug 6, 2010 at 6:44 AM, P-O Yliniemi p...@bsd-guide.net wrote:

  Hello!

 I have built a OpenSolaris / ZFS based storage system for one of our
 customers. The configuration is about this:

 Motherboard/CPU: SuperMicro X7SBE / Xeon (something, sorry - can't remember
 and do not have my specification nearby)
 RAM: 8GB ECC (X7SBE won't take more)
 Drives for storage: 16*1.5TB Seagate ST31500341AS, connected to two
 AOC-SAT2-MV8 controllers
 Drives for operating system: 2*80GB Intel X25-M (mirror)

 ZFS configuration: Two vdevs, raid-z of 7+1 disks per set, striped together
 (gives a zpool with about 21TB storage space)

 Disk performance: around 700-800MB/s, tested and timed with 'mkfile' and
 'time' (a 40GB file is created in just about a minute)
 I have a spare X25-M drive of 40GB to use for cache or log (or both), but
 since the disk array is a lot faster than the SSD-disk, I can not see the
 advantage in using it as a cache device.

 Is there any advantages for using a separate log or cache device in this
 case ?

 Regards,
  PeO



I can tell you for sure that there can be a really nice advantage for
sequential writes.

to see this yourself, do the following:

create a filesystem, share it out over NFS
create a really big tar.gz file and put it in the filesystem

log in from a network client via NFS and extract the tarball using
something like:


time tar xzfv some.tar.gz


do this a few times to get an average, then add the SSD as a log device.

I have the exact same motherboard with a very similar setup, and I noticed
a 400% NFS performance boost by doing this.


try it yourself =)
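The test procedure above, collected into one sketch. Pool, dataset, and device names are placeholders; the commands are echoed for review rather than run:

```shell
# Benchmark NFS tarball extraction before and after attaching a slog.
slog_bench() {
    echo "zfs create -o sharenfs=on tank/nfstest"
    echo "time tar xzfv some.tar.gz    # on the NFS client; repeat a few times"
    echo "zpool add tank log c6t5d0s0  # then attach the SSD as a log device"
}
slog_bench
```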


Re: [zfs-discuss] Confused about consumer drives and zfs can someone help?

2010-07-23 Thread Thomas Burgess
I've found the Seagate 7200.12 1TB drives and Hitachi 7K2000 2TB drives to
be by far the best.

I've read lots of horror stories about any WD drive with 4k sectors... it's
best to stay away from them.

I've also read plenty of people say that the green drives are terrible.


Re: [zfs-discuss] L2ARC and ZIL on same SSD?

2010-07-23 Thread Thomas Burgess
On Wed, Jul 21, 2010 at 12:42 PM, Orvar Korvar 
knatte_fnatte_tja...@yahoo.com wrote:

 Are there any drawbacks to partition a SSD in two parts and use L2ARC on
 one partition, and ZIL on the other? Any thoughts?



It's not going to be as good as having separate devices, but I can tell you
that I did this on my home system and it was WELL worth it.

I used one of the SandForce 1500 based SSDs, 50 GB.

I used 9 GB for the ZIL, and the rest for L2ARC. Adding the ZIL gave me
about a 400-500% NFS write performance boost. Seeing as you can't ever use
more than half your RAM for the ZIL anyways, the only real downside to
doing this is that I/O becomes split between the ZIL and L2ARC, but
realistically it depends on your workload... for mine, I noticed a HUGE
benefit from doing this.
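For reference, attaching the two slices of a partitioned SSD looks roughly like this. The pool, device, and slice names are placeholders (borrowed from examples elsewhere in the archive), so substitute your own; the commands are echoed rather than executed:

```shell
# One SSD, two slices: a small one as the ZIL (slog), the rest as L2ARC.
slice_attach() {
    echo "zpool add tank log c6t5d0s0     # ~9 GB slice for the ZIL"
    echo "zpool add tank cache c6t5d0s1   # remaining space as L2ARC"
}
slice_attach
```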


Re: [zfs-discuss] NFS performance?

2010-07-23 Thread Thomas Burgess
On Fri, Jul 23, 2010 at 3:11 AM, Sigbjorn Lie sigbj...@nixtra.com wrote:

 Hi,

 I've been searching around on the Internet to fine some help with this, but
 have been
 unsuccessfull so far.

 I have some performance issues with my file server. I have an OpenSolaris
 server with a Pentium D
 3GHz CPU, 4GB of memory, and a RAIDZ1 over 4 x Seagate (ST31500341AS) 1,5TB
 SATA drives.

 If I compile or even just unpack a tar.gz archive with source code (or any
 archive with lots of
 small files), on my Linux client onto a NFS mounted disk to the OpenSolaris
 server, it's extremely
 slow compared to unpacking this archive on the locally on the server. A
 22MB .tar.gz file
 containng 7360 files takes 9 minutes and 12seconds to unpack over NFS.

 Unpacking the same file locally on the server is just under 2 seconds.
 Between the server and
 client I have a gigabit network, which at the time of testing had no other
 significant load. My
 NFS mount options are: rw,hard,intr,nfsvers=3,tcp,sec=sys.

 Any suggestions to why this is?


 Regards,
 Sigbjorn






As someone else said, adding an SSD log device can help hugely. I saw about
a 500% NFS write increase by doing this.
I've heard of people getting even more.


Re: [zfs-discuss] NFS performance?

2010-07-23 Thread Thomas Burgess
On Fri, Jul 23, 2010 at 5:00 AM, Sigbjorn Lie sigbj...@nixtra.com wrote:

 I see I have already received several replies, thanks to all!

 I would not like to risk losing any data, so I believe a ZIL device would
 be the way for me. I see
 these exists in different prices. Any reason why I would not buy a cheap
 one? Like the Intel X25-V
 SSD 40GB 2,5?

 What size of ZIL device would be recommened for my pool consisting for 4 x
 1,5TB drives? Any
 brands I should stay away from?



 Regards,
 Sigbjorn

Like I said, I bought a 50 GB OCZ Vertex Limited Edition... it's like 200
dollars, up to 15,000 random IOPS (IOPS is what you want for a fast ZIL).


I've gotten excellent performance out of it.


Re: [zfs-discuss] OCZ Vertex 2 Pro performance numbers

2010-06-25 Thread Thomas Burgess


 Conclusion: This device will make an excellent slog device. I'll order
 them today ;)


I have one and I love it... I sliced it though: used 9 GB for the ZIL and
the rest for L2ARC (my server is on a smallish network with about 10
clients).

It made a huge difference in NFS performance and other stuff as well (for
instance, doing something like du will run a TON faster than before).

For the money, it's a GREAT deal. I am very impressed.



 --Arne


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Thomas Burgess
On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen pa...@iki.fi wrote:

 On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
  Well, I've searched my brains out and I can't seem to find a reason for
 this.
 
  I'm getting bad to medium performance with my new test storage device.
 I've got 24 1.5T disks with 2 SSDs configured as a zil log device. I'm using
 the Areca raid controller, the driver being arcmsr. Quad core AMD with 16
 gig of RAM OpenSolaris upgraded to snv_134.
 
  The zpool has 2 11-disk raidz2's and I'm getting anywhere between 1MB/sec
 to 40MB/sec with zpool iostat. On average, though it's more like 5MB/sec if
 I watch while I'm actively doing some r/w. I know that I should be getting
 better performance.
 

 How are you measuring the performance?
 Do you understand raidz2 with that big amount of disks in it will give you
 really poor random write performance?

 -- Pasi


I have a media server with 2 raidz2 vdevs 10 drives wide myself, without a
ZIL (but with a 64 GB L2ARC).

I can write to it at about 400 MB/s over the network, and scrubs show 600
MB/s, but it really depends on the type of I/O you have... random I/O
across 2 vdevs will be REALLY slow (roughly as slow as the slowest 2 drives
in your pool).

40 MB/s might be right if it's random... though I'd still expect to see
more.


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Thomas Burgess
On Fri, Jun 18, 2010 at 6:34 AM, Curtis E. Combs Jr. ceco...@uga.edu wrote:

 Oh! Yes. dedup. not compression, but dedup, yes.





Dedup may be your problem... it requires a lot of RAM and/or a decent
L2ARC, from what I've been reading.


Re: [zfs-discuss] Pool is wrong size in b134

2010-06-17 Thread Thomas Burgess



 Also, the disks were replaced one at a time last year from 73GB to 300GB to
 increase the size of the pool.  Any idea why the pool is showing up as the
 wrong size in b134 and have anything else to try?  I don't want to upgrade
 the pool version yet and then not be able to revert back...

 thanks,
 Ben




Sometimes when you upgrade a pool by replacing drives with bigger ones, you
have to export the pool, then import it again.

Or at least that's what I've always done.


Re: [zfs-discuss] size of slog device

2010-06-14 Thread Thomas Burgess
On Mon, Jun 14, 2010 at 4:41 AM, Arne Jansen sensi...@gmx.net wrote:

 Hi,

 I known it's been discussed here more than once, and I read the
 Evil tuning guide, but I didn't find a definitive statement:

 There is absolutely no sense in having slog devices larger than
 then main memory, because it will never be used, right?
 ZFS will rather flush the txg to disk than reading back from
 zil?
 So there is a guideline to have enough slog to hold about 10
 seconds of zil, but the absolute maximum value is the size of
 main memory. Is this correct?




I thought it was half the size of memory.
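Either way, the guideline works out to a small number. A back-of-envelope sketch, assuming writes arrive over a single gigabit link (~120 MB/s); the figures are illustrative, not from the thread:

```shell
# The slog only ever holds a few seconds of in-flight synchronous writes,
# so size it from incoming bandwidth, not from pool size.
mb_per_sec=120                    # ~1 Gbit/s of incoming writes
seconds=10                        # the "10 seconds of zil" guideline
echo $(( mb_per_sec * seconds ))  # → 1200 MB, i.e. roughly 1.2 GB of slog
```

On that arithmetic, even a small SSD slice is far more slog than a gigabit-fed box can use, which is why the main-memory (or half-memory) cap is the binding limit, not device size.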


Re: [zfs-discuss] Reconfiguring a RAID-Z dataset

2010-06-12 Thread Thomas Burgess


  Yeah, this is what I was thinking too...

 Is there anyway to retain snapshot data this way? I've read about the ZFS
 replay/mirror features, but my impression was that this was more so for a
 development mirror for testing rather than a reliable backup? This is the
 only way I know of that one could do something like this. Is there some
 other way to create a solid clone, particularly with a machine that won't
 have the same drive configuration?




I recently used zfs send/recv to copy a bunch of datasets from a raidz2 box
to a box made of mirrors. It works fine.


Re: [zfs-discuss] Reconfiguring a RAID-Z dataset

2010-06-12 Thread Thomas Burgess
On Sun, Jun 13, 2010 at 12:18 AM, Joe Auty j...@netmusician.org wrote:

  Thomas Burgess wrote:


   Yeah, this is what I was thinking too...

 Is there anyway to retain snapshot data this way? I've read about the ZFS
 replay/mirror features, but my impression was that this was more so for a
 development mirror for testing rather than a reliable backup? This is the
 only way I know of that one could do something like this. Is there some
 other way to create a solid clone, particularly with a machine that won't
 have the same drive configuration?




  I recently used zfs send/recv to copy a bunch of datasets from a raidz2
 box to a box made on mirrors.  It works fine.


  ZFS send/recv looks very cool and very convenient. I wonder what it was
 that I read that suggested not relying on it for backups? Maybe this was
 alluding to the notion that like relying on RAID for a backup, if there is
 corruption your mirror (i.e. machine you are using with zfs recv) will be
 corrupted too?

 At any rate, thanks for answering this question! At some point if I go this
 route I'll test send and recv functionality to give all of this a dry run.






Well, it's not considered to be an enterprise-ready backup solution. I
think this is due to the fact that you can't recover a single file from a
zfs send stream, but despite this limitation it's still VERY handy.

Another reason, from what I understand by reading this list, is that zfs
send streams aren't resilient. If you do not pipe the stream directly into
a zfs receive, it might get corrupted and be worthless (basically, don't
save the output of zfs send and expect to receive it later).

Again, this is not relevant if you are doing a zfs send into a zfs receive
at the other end.

I think the 2 reasons I just gave are the reasons people have warned
against it... but still, it's damn amazing.
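The "pipe it directly into zfs receive" point, as a command sketch. The dataset, snapshot, and host names are made up; the commands are echoed for review rather than run:

```shell
# Snapshot, then stream straight into a receive on the other end instead
# of saving the send stream to a file.
replicate_sketch() {
    echo "zfs snapshot tank/data@backup1"
    echo "zfs send tank/data@backup1 | ssh otherbox zfs recv backup/data"
}
replicate_sketch
```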





 --
 Joe Auty, NetMusician
 NetMusician helps musicians, bands and artists create beautiful,
 professional, custom designed, career-essential websites that are easy to
 maintain and to integrate with popular social networks.
 www.netmusician.org
 j...@netmusician.org




Re: [zfs-discuss] Ideal SATA/SAS Controllers for ZFS

2010-05-26 Thread Thomas Burgess
On Wed, May 26, 2010 at 5:47 PM, Brandon High bh...@freaks.com wrote:

 On Sat, May 15, 2010 at 4:01 AM, Marc Bevand m.bev...@gmail.com wrote:
  I have done quite some research over the past few years on the best (ie.
  simple, robust, inexpensive, and performant) SATA/SAS controllers for
 ZFS.

 I've spent some time looking at the capabilities of a few controllers
 based on the questions about the SiI3124 and PMP support.

 According to the docs, the Marvell 88SX6081 driver doesn't support NCQ
 or PMP, though the card does. While I'm not really performance bound
 on my system, I imagine NCQ would help performance a bit, at least for
 scrubs or resilvers. Even more so because I'm using the slow WD10EADS
 drives.

 This raises the question of whether a SAS controller supports NCQ for
 sata drives. Would an LSI 1068e based controller? What about a LSI
 2008 based card?



If that is the chip on the AOC-SAT2-MV8 then I'm pretty sure it does
support NCQ.

I'm also pretty sure the LSI supports NCQ.

I'm not 100% sure though.


Re: [zfs-discuss] Ideal SATA/SAS Controllers for ZFS

2010-05-26 Thread Thomas Burgess
I thought it did... I couldn't imagine Sun using that chip in the original
Thumper if it didn't support NCQ. Also, I've read where people have had to
DISABLE NCQ on this driver to fix one bug or another (as a workaround).


On Wed, May 26, 2010 at 8:40 PM, Marty Faltesek
marty.falte...@oracle.com wrote:

 On Wed, 2010-05-26 at 17:18 -0700, Brandon High wrote:
   If that is the chip on the AOC-SAT2-MV8 then i'm pretty sure it does
  suppoer
   NCQ
 
  Not according to the driver documentation:
  http://docs.sun.com/app/docs/doc/819-2254/marvell88sx-7d
  In addition, the 88SX6081 device supports the SATA II Phase 1.0
  specification features, including SATA II 3.0 Gbps speed, SATA II Port
  Multiplier functionality and SATA II Port Selector. Currently the
  driver does not support native command queuing, port multiplier or
  port selector functionality.
 
  The driver source isn't available (or I couldn't find it) so it's not
  easy to confirm.

 marvell88sx does support NCQ.  This man page error was corrected in
 nevada build 138.

 Marty




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] question about zpool iostat output

2010-05-25 Thread Thomas Burgess
I was just wondering:

I added a SLOG/ZIL to my new system today...i noticed that the L2ARC shows
up under its own heading...but the SLOG/ZIL doesn't...is this correct?


see:



   capacity operationsbandwidth
poolalloc   free   read  write   read  write
--  -  -  -  -  -  -
rpool   15.3G  44.2G  0  0  0  0
  c6t4d0s0  15.3G  44.2G  0  0  0  0
--  -  -  -  -  -  -
tank10.9T  7.22T  0  2.43K  0   300M
  raidz210.9T  7.22T  0  2.43K  0   300M
c4t6d0  -  -  0349  0  37.6M
c4t5d0  -  -  0350  0  37.6M
c5t7d0  -  -  0350  0  37.6M
c5t3d0  -  -  0350  0  37.6M
c8t0d0  -  -  0354  0  37.6M
c4t7d0  -  -  0351  0  37.6M
c4t3d0  -  -  0350  0  37.6M
c5t8d0  -  -  0349  0  37.6M
c5t0d0  -  -  0348  0  37.6M
c8t1d0  -  -  0353  0  37.6M
  c6t5d0s0  0  8.94G  0  0  0  0
cache   -  -  -  -  -  -
  c6t5d0s1  37.5G  0  0158  0  19.6M



It seems sort of strange to me that it doesn't look like this instead:






   capacity operationsbandwidth
poolalloc   free   read  write   read  write
--  -  -  -  -  -  -
rpool   15.3G  44.2G  0  0  0  0
  c6t4d0s0  15.3G  44.2G  0  0  0  0
--  -  -  -  -  -  -
tank10.9T  7.22T  0  2.43K  0   300M
  raidz210.9T  7.22T  0  2.43K  0   300M
c4t6d0  -  -  0349  0  37.6M
c4t5d0  -  -  0350  0  37.6M
c5t7d0  -  -  0350  0  37.6M
c5t3d0  -  -  0350  0  37.6M
c8t0d0  -  -  0354  0  37.6M
c4t7d0  -  -  0351  0  37.6M
c4t3d0  -  -  0350  0  37.6M
c5t8d0  -  -  0349  0  37.6M
c5t0d0  -  -  0348  0  37.6M
c8t1d0  -  -  0353  0  37.6M
log   -  -  -  -  -  -
  c6t5d0s0  0  8.94G  0  0  0  0
cache   -  -  -  -  -  -
  c6t5d0s1  37.5G  0  0158  0  19.6M
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB Flashdrive as SLOG?

2010-05-25 Thread Thomas Burgess
The last couple times i've read this question, people normally responded
with:

It depends

you might not even NEED a slog, there is a script floating around which can
help determine that...

If you could benefit from one, it's going to be IOPS which help you...so if
the usb drive has more iops than your pool configuration does, then it might
give some benefit...but then again, usb might not be as safe either, and,
if on an older pool version, you may want to mirror it.


On Tue, May 25, 2010 at 8:11 AM, Kyle McDonald kmcdon...@egenera.comwrote:

 Hi,

 I know the general discussion is about flash SSD's connected through
 SATA/SAS or possibly PCI-E these days. So excuse me if I'm askign
 something that makes no sense...

 I have a server that can hold 6 U320 SCSI disks. Right now I put in 5
 300GB for a data pool, and 1 18GB for the root pool.

 I've been thinking lately that I'm not sure I like the root pool being
 unprotected, but I can't afford to give up another drive bay. So
 recently the idea occurred to me to go the other way. If I were to get 2
 USB Flash Thunb drives say 16 or 32 GB each, not only would i be able to
 mirror the root pool, but I'd also be able to put a 6th 300GB drive into
 the data pool.

 That led me to wonder whether partitioning out 8 or 12 GB on a 32GB
 thumb drive would be beneficial as an slog?? I bet the USB bus won't be
 as good as SATA or SAS, but will it be better than the internal ZIL on
 the U320 drives?

 This seems like at least a win-win, and possibly a win-win-win.
 Is there some other reason I'm insane to consider this?

  -Kyle


 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] question about zpool iostat output

2010-05-25 Thread Thomas Burgess
i am running the last release from the genunix page

uname -a output:

SunOS wonslung-raidz2 5.11 snv_134 i86pc i386 i86pc Solaris


On Tue, May 25, 2010 at 10:33 AM, Cindy Swearingen 
cindy.swearin...@oracle.com wrote:

 Hi Thomas,

 This looks like a display bug. I'm seeing it too.

 Let me know which Solaris release you are running and
 I will file a bug.

 Thanks,

 Cindy


 On 05/25/10 01:42, Thomas Burgess wrote:

 I was just wondering:

 I added a SLOG/ZIL to my new system today...i noticed that the L2ARC shows
 up under its own heading...but the SLOG/ZIL doesn't...is this correct?


 see:



   capacity operationsbandwidth
 poolalloc   free   read  write   read  write
 --  -  -  -  -  -  -
 rpool   15.3G  44.2G  0  0  0  0
  c6t4d0s0  15.3G  44.2G  0  0  0  0
 --  -  -  -  -  -  -
 tank10.9T  7.22T  0  2.43K  0   300M
  raidz210.9T  7.22T  0  2.43K  0   300M
c4t6d0  -  -  0349  0  37.6M
c4t5d0  -  -  0350  0  37.6M
c5t7d0  -  -  0350  0  37.6M
c5t3d0  -  -  0350  0  37.6M
c8t0d0  -  -  0354  0  37.6M
c4t7d0  -  -  0351  0  37.6M
c4t3d0  -  -  0350  0  37.6M
c5t8d0  -  -  0349  0  37.6M
c5t0d0  -  -  0348  0  37.6M
c8t1d0  -  -  0353  0  37.6M
  c6t5d0s0  0  8.94G  0  0  0  0
 cache   -  -  -  -  -  -
  c6t5d0s1  37.5G  0  0158  0  19.6M



 It seems sort of strange to me that it doesn't look like this instead:






   capacity operationsbandwidth
 poolalloc   free   read  write   read  write
 --  -  -  -  -  -  -
 rpool   15.3G  44.2G  0  0  0  0
  c6t4d0s0  15.3G  44.2G  0  0  0  0
 --  -  -  -  -  -  -
 tank10.9T  7.22T  0  2.43K  0   300M
  raidz210.9T  7.22T  0  2.43K  0   300M
c4t6d0  -  -  0349  0  37.6M
c4t5d0  -  -  0350  0  37.6M
c5t7d0  -  -  0350  0  37.6M
c5t3d0  -  -  0350  0  37.6M
c8t0d0  -  -  0354  0  37.6M
c4t7d0  -  -  0351  0  37.6M
c4t3d0  -  -  0350  0  37.6M
c5t8d0  -  -  0349  0  37.6M
c5t0d0  -  -  0348  0  37.6M
c8t1d0  -  -  0353  0  37.6M
 log   -  -  -  -  -  -
  c6t5d0s0  0  8.94G  0  0  0  0
 cache   -  -  -  -  -  -
  c6t5d0s1  37.5G  0  0158  0  19.6M






 


 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions about zil

2010-05-25 Thread Thomas Burgess
On Tue, May 25, 2010 at 11:27 AM, Edward Ned Harvey
solar...@nedharvey.comwrote:

  From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
  boun...@opensolaris.org] On Behalf Of Nicolas Williams
 
   I recently got a new SSD (ocz vertex LE 50gb)
  
   It seems to work really well as a ZIL performance wise.
    I know it doesn't have a supercap so let's say dataloss
    occurs...is it just dataloss or is it pool loss?
 
  Just dataloss.

 WRONG!

 The correct answer depends on your version of solaris/opensolaris.  More
 specifically, it depends on the zpool version.  The latest fully updated
 sol10 and the latest opensolaris release (2009.06) only go up to zpool 14
 or
 15.  But as of zpool 19, a ZIL loss doesn't permanently offline the
 whole pool.  I know this is available in the developer builds.

 The best answer to this, I think, is in the ZFS Best Practices Guide:
 (uggh, it's down right now, so I can't paste the link)

 If you have zpool <19, and you lose an unmirrored ZIL, then you lose your
 pool.  Also, as a configurable option apparently, I know on my systems, it
 also meant I needed to power cycle.

 If you have zpool >=19, and you lose an unmirrored ZIL, then performance
 will be degraded, but everything continues to work as normal.

 Apparently the most common mode of failure for SSD's is also failure to
 read.  To make it worse, a ZIL is only read after system crash, which means
 the possibility of having a failed SSD undetected must be taken into
 consideration.  If you do discover a failed ZIL after crash, with zpool <19
 your pool is lost.  But with zpool >=19 only the unplayed writes are lost.
 With zpool >=19, your pool will be intact, but you would lose up to 30sec
 of writes that occurred just before the crash.


 I didn't ask about losing my zil.

I asked about power loss taking out my pool.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions about zil

2010-05-25 Thread Thomas Burgess


 At least to me, this was not clearly not asking about losing zil and was
 not clearly asking about power loss.  Sorry for answering the question
 you
 thought you didn't ask.


I was only responding to your response of WRONG!!!   The guy wasn't wrong in
regards to my questions.  I'm sorry for not making THAT more clear in my
post.



 I would suggest clarifying your question, by saying instead:  so let's say
 *power* loss occurs.  Then it would have been clear what you were asking.


I'm pretty sure i did ask about power loss...or at least it was implied by
my point about the UPS.  You're right, i probably should have been a little
more clear.


 Since this is a SSD you're talking about, unless you have enabled the
 volatile write cache on that disk (which you should never do), and the
 disk incorrectly handles cache flush commands (which it should never do),
 then the supercap is irrelevant.  All ZIL writes are to be done
 synchronously.

 This SSD doesn't use a volatile write cache (at least i don't think it
does, it's a SF-1500 based ssd)
I might be wrong about this, but i thought one of the biggest things about
the sandforce was that it doesn't use DRAM


 If you have a power loss, you don't lose your pool, and you also don't lose
 any writes in the ZIL.  You do, however, lose any async writes that were
 not
 yet flushed to disk.  There is no way to prevent that, regardless of ZIL
 configuration.

Yes, I know that i lose async writes...i just wasn't sure if that resulted
in an issue...I might be somewhat confused as to how the ZIL works but i
thought the point of the ZIL was to pretend a write actually happened when
it may not have actually been flushed to disk yet...in this case, a write to
the zil might not make it to disk...i just didn't know if this could result
in a loss of a pool due to some sort of corruption of the uberblock or
something...I'm not entirely up to speed on the voodoo that is ZFS.



I wasn't trying to be rude, sorry if it came off like that.

I am aware of the issue regarding removing the ZIL on non-dev versions of
opensolaris...i am on b134 so that doesn't apply to me.  Thanks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions about zil

2010-05-25 Thread Thomas Burgess
On Tue, May 25, 2010 at 12:38 PM, Bob Friesenhahn 
bfrie...@simple.dallas.tx.us wrote:

 On Mon, 24 May 2010, Thomas Burgess wrote:


  It's a sandforce sf-1500 model but without a supercap...here's some info
  on it:

 Maximum Performance

  *  Max Read: up to 270MB/s
  *  Max Write: up to 250MB/s
  *  Sustained Write: up to 235MB/s
  *  Random Write 4k: 15,000 IOPS
  *  Max 4k IOPS: 50,000


 Isn't there a serious problem with these specifications?  It seems that the
 minimum assured performance values (and the median) are much more
 interesting than some maximum performance value which might only be
 reached during a brief instant of the device lifetime under extremely ideal
  circumstances.  It seems that toilet paper may be of much more practical use
 than these specifications.  In fact, I reject them as being specifications
 at all.

 The Apollo reentry vehicle was able to reach amazing speeds, but only for a
 single use.

 Bob

What exactly do you mean?
Every review i've read about this device has been great.  Every review i've
read about the sandforce controllers has been good too...are you saying they
have shorter lifetimes?  Everything i've read has made them sound like they
should last longer than typical ssds because they write less actual data




 --
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions about zil

2010-05-25 Thread Thomas Burgess
Also, let me note, it came with a 3 year warranty so I expect it to last at
least 3 years...but if it doesn't, i'll just return it under the warranty.


On Tue, May 25, 2010 at 1:26 PM, Thomas Burgess wonsl...@gmail.com wrote:



 On Tue, May 25, 2010 at 12:38 PM, Bob Friesenhahn 
 bfrie...@simple.dallas.tx.us wrote:

 On Mon, 24 May 2010, Thomas Burgess wrote:


  It's a sandforce sf-1500 model but without a supercap...here's some info
  on it:

 Maximum Performance

  *  Max Read: up to 270MB/s
  *  Max Write: up to 250MB/s
  *  Sustained Write: up to 235MB/s
  *  Random Write 4k: 15,000 IOPS
  *  Max 4k IOPS: 50,000


 Isn't there a serious problem with these specifications?  It seems that
 the minimum assured performance values (and the median) are much more
 interesting than some maximum performance value which might only be
 reached during a brief instant of the device lifetime under extremely ideal
  circumstances.  It seems that toilet paper may be of much more practical use
 than these specifications.  In fact, I reject them as being specifications
 at all.

 The Apollo reentry vehicle was able to reach amazing speeds, but only for
 a single use.

 Bob

  What exactly do you mean?
  Every review i've read about this device has been great.  Every review i've
  read about the sandforce controllers has been good too...are you saying they
  have shorter lifetimes?  Everything i've read has made them sound like they
  should last longer than typical ssds because they write less actual data




 --
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us,
 http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] questions about zil

2010-05-24 Thread Thomas Burgess
I recently got a new SSD (ocz vertex LE 50gb)

It seems to work really well as a ZIL performance wise.  My question is, how
safe is it?  I know it doesn't have a supercap so let's say dataloss
occurs...is it just dataloss or is it pool loss?


also, does the fact that i have a UPS matter?


the numbers i'm seeing are really nice...these are some nfs tar times
before zil:


real 2m21.498s

user 0m5.756s

sys 0m8.690s


real 2m23.870s

user 0m5.756s

sys 0m8.739s



and these are the same ones after.




real 0m32.739s

user 0m5.708s

sys 0m8.515s



real 0m35.580s

user 0m5.707s

sys 0m8.526s
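A quick back-of-envelope on the wall-clock times above (~2m22s before vs ~0m34s after) puts the slog at roughly a 4x win for this nfs tar workload:

```shell
# Rough speedup from the tar timings above (142 s vs 34 s wall clock).
awk 'BEGIN { before = 142; after = 34; printf "%.1fx\n", before / after }'
# prints 4.2x
```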




I also sliced it...i have 16 gb ram so i used a 9 gb slice for zil and the
rest for L2ARC



this is for a single 10 drive raidz2 vdev so far...i'm really impressed
with the performance gains
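The slice split itself is simple arithmetic; note the half-of-RAM cap on useful slog size used below is a common rule of thumb, an assumption on my part rather than something stated in this thread:

```shell
# Hypothetical sizing sketch using the numbers from the post (16 GB RAM,
# 50 GB SSD).  The half-of-RAM cap on slog size is a rule-of-thumb assumption.
ram_gb=16
ssd_gb=50
zil_gb=$(( ram_gb / 2 ))          # ~8 GB is plenty; the post used 9
l2arc_gb=$(( ssd_gb - zil_gb ))   # the rest of the SSD goes to L2ARC
echo "zil=${zil_gb}G l2arc=${l2arc_gb}G"   # prints zil=8G l2arc=42G
```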
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions about zil

2010-05-24 Thread Thomas Burgess


  ZFS is always consistent on-disk, by design. Loss of the ZIL will result
 in loss of the data in the ZIL which hasn't been flushed out to the hard
 drives, but otherwise, the data on the hard drives is consistent and
 uncorrupted.



 This is what i thought.  I have read this list on and off for awhile now
but i'm not a guru...I see a lot of stuff about the intel ssd and disabling
the write cache...so i just wasn't sure...This is good news.






  It avoids the scenario of losing data in your ZIL due to power loss (and,
 of course, the rest of your system).  So, yes, if you actually care about
 your system, I'd recommend at least a minimal UPS to allow for quick
 shutdown after a power loss.


 yes, i have a nice little UPS.  I've tested it a few times and it seems to
work well.  It gives me about 20 minutes of power and can even send commands
via a script to shut down the system before the battery goes dry.




 That's going to pretty much be the best-case use for the ZIL - NFS writes
 being synchronous.  Of course, using the rest of the SSD for L2ARC is likely
 to be almost (if not more) helpful for performance for a wider variety of
 actions.


 yes, i have another machine without a zil (i bought a kingston 64 gb ssd on
sale and intended to try it as a zil but ultimately decided to just use it
as l2arc because of the performance numbers...)  but the l2arc helps a ton
for my uses.  I did slice this ssd...i used 9 gb for zil and the rest for
l2arc (about 36 gb)   I'm really impressed with this ssd...for only 160
dollars (180 - 20 mail in rebate) it's a killer deal.

it can do 235 MB/s sustained writes and has something like 15,000 iops





 --
 Erik Trimble
 Java System Support
 Mailstop:  usca22-123
 Phone:  x17195
 Santa Clara, CA


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions about zil

2010-05-24 Thread Thomas Burgess


 Not familiar with that model


It's a sandforce sf-1500 model but without a supercap...here's some info on
it:



Maximum Performance

   - Max Read: up to 270MB/s
   - Max Write: up to 250MB/s
   - Sustained Write: up to 235MB/s
   - Random Write 4k: 15,000 IOPS
   - Max 4k IOPS: 50,000



per
http://www.ocztechnology.com/products/solid-state-drives/2-5--sata-ii/performance-enterprise-solid-state-drives/ocz-vertex-limited-edition-sata-ii-2-5--ssd.html




 Wow.  That's a pretty huge improvement. :-)

 - Garrett (newly of Nexenta)



yes, i love it.  I'm really impressed with this ssd for the money...160 usd
(180 - 20 rebate)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New SSD options

2010-05-24 Thread Thomas Burgess



 From earlier in the thread, it sounds like none of the SF-1500 based
 drives even have a supercap, so it doesn't seem that they'd necessarily
 be a better choice than the SLC-based X-25E at this point unless you
 need more write IOPS...

 Ray


I think the upcoming OCZ Vertex 2 Pro will have a supercap.

I just bought an ocz vertex le, it doesn't have a supercap but it DOES have
some awesome specs otherwise..
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] confused

2010-05-23 Thread Thomas Burgess
did this come out?

http://cr.opensolaris.org/~gman/opensolaris-whats-new-2010-05/

i was googling trying to find info about the next release and ran across
this


Does this mean it's actually about to come out before the end of the month
or is this something else?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] confused

2010-05-23 Thread Thomas Burgess
never mind...just found more info on this...should have held back from
asking


On Mon, May 24, 2010 at 1:26 AM, Thomas Burgess wonsl...@gmail.com wrote:

 did this come out?

 http://cr.opensolaris.org/~gman/opensolaris-whats-new-2010-05/

 i was googling trying to find info about the next release and ran across
 this


 Does this mean it's actually about to come out before the end of the month
 or is this something else?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-22 Thread Thomas Burgess
yah, unfortunately this is the first send.  i'm trying to send 9 TB of data.
 It really sucks because i was at 6 TB when it lost power

On Sat, May 22, 2010 at 2:34 AM, Brandon High bh...@freaks.com wrote:

 You can resume a send if the destination has a snapshot in common with
 the source. If you don't, there's nothing you can do.

 It's probably taking a while to restart because the sends that were
 interrupted need to be rolled back.

 Sent from my Nexus One.

 On May 21, 2010 9:44 PM, Thomas Burgess wonsl...@gmail.com wrote:

 I can't tell you for sure

 For some reason the server lost power and it's taking forever to come back
 up.

 (i'm really not sure what happened)

 anyways, this leads me to my next couple questions:


 Is there any way to resume a zfs send/recv

 Why is it taking so long for the server to come up?
 it's stuck on Reading ZFS config

 and there is a FLURRY of hard drive lights blinking (all 10 in sync )





 On Sat, May 22, 2010 at 12:26 AM, Brandon High bh...@freaks.com wrote:
 
  On Fri, May 21, 201...


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HDD Serial numbers for ZFS

2010-05-22 Thread Thomas Burgess
install smartmontools


There is no package for it but it's EASY to install

once you do, you can get output like this:


pfexec /usr/local/sbin/smartctl -d sat,12 -a /dev/rdsk/c5t0d0
smartctl 5.39.1 2010-01-28 r3054 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family: Seagate Barracuda 7200.12 family
Device Model: ST31000528AS
Serial Number:6VP06FF5
Firmware Version: CC34
User Capacity:1,000,204,886,016 bytes
Device is:In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  ATA-8-ACS revision 4
Local Time is:Sat May 22 11:15:50 2010 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status:  (   0) The previous self-test routine
completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection:  ( 609) seconds.
Offline data collection
capabilities:  (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities:(0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability:(0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time:  (   1) minutes.
Extended self-test routine
recommended polling time:  ( 192) minutes.
Conveyance self-test routine
recommended polling time:  (   2) minutes.
SCT capabilities:(0x103f) SCT Status supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME  FLAG VALUE WORST THRESH TYPE  UPDATED
 WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate 0x000f   113   099   006Pre-fail  Always
  -   55212722
  3 Spin_Up_Time0x0003   095   095   000Pre-fail  Always
  -   0
  4 Start_Stop_Count0x0032   100   100   020Old_age   Always
  -   132
  5 Reallocated_Sector_Ct   0x0033   100   100   036Pre-fail  Always
  -   1
  7 Seek_Error_Rate 0x000f   081   060   030Pre-fail  Always
  -   136183285
  9 Power_On_Hours  0x0032   091   091   000Old_age   Always
  -   7886
 10 Spin_Retry_Count0x0013   100   100   097Pre-fail  Always
  -   0
 12 Power_Cycle_Count   0x0032   100   100   020Old_age   Always
  -   132
183 Runtime_Bad_Block   0x   100   100   000Old_age   Offline
   -   0
184 End-to-End_Error0x0032   100   100   099Old_age   Always
  -   0
187 Reported_Uncorrect  0x0032   100   100   000Old_age   Always
  -   0
188 Command_Timeout 0x0032   100   100   000Old_age   Always
  -   0
189 High_Fly_Writes 0x003a   085   085   000Old_age   Always
  -   15
190 Airflow_Temperature_Cel 0x0022   063   054   045Old_age   Always
  -   37 (Lifetime Min/Max 32/40)
194 Temperature_Celsius 0x0022   037   046   000Old_age   Always
  -   37 (0 16 0 0)
195 Hardware_ECC_Recovered  0x001a   048   025   000Old_age   Always
  -   55212722
197 Current_Pending_Sector  0x0012   100   100   000Old_age   Always
  -   0
198 Offline_Uncorrectable   0x0010   100   100   000Old_age   Offline
   -   0
199 UDMA_CRC_Error_Count0x003e   200   200   000Old_age   Always
  -   0
240 Head_Flying_Hours   0x   100   253   000Old_age   Offline
   -   23691039612915
241 Total_LBAs_Written  0x   100   253   000Old_age   Offline
   -   263672243
242 Total_LBAs_Read 0x   100   253   000Old_age   Offline
   -   960644151

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]


SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
100  Not_testing
200  Not_testing
300  Not_testing
400  Not_testing
500  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.


On Sat, May 22, 2010 at 3:09 AM, Andreas Iannou 
andreas_wants_the_w...@hotmail.com wrote:

  I 

Re: [zfs-discuss] HDD Serial numbers for ZFS

2010-05-22 Thread Thomas Burgess
i don't think there is but it's dirt simple to install.

I followed the instructions here:


http://cafenate.wordpress.com/2009/02/22/setting-up-smartmontools-on-opensolaris/



On Sat, May 22, 2010 at 3:19 AM, Andreas Iannou 
andreas_wants_the_w...@hotmail.com wrote:

  Thanks Thomas, I thought there'd already be a package in the repo for it.

 Cheers,
 Andre

 --
 Date: Sat, 22 May 2010 03:17:38 -0400
 Subject: Re: [zfs-discuss] HDD Serial numbers for ZFS
 From: wonsl...@gmail.com
 To: andreas_wants_the_w...@hotmail.com
 CC: zfs-discuss@opensolaris.org


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-22 Thread Thomas Burgess
i only care about the most recent snapshot, as this is a growing video
collection.

i do have snapshots, but i only keep them for when/if i accidentally delete
something, or rename something wrong.


On Sat, May 22, 2010 at 3:43 AM, Brandon High bh...@freaks.com wrote:

 On Fri, May 21, 2010 at 10:22 PM, Thomas Burgess wonsl...@gmail.com
 wrote:
   yah, it seems that rsync is faster for what i need anyways...at least
  right
   now...

 If you don't have snapshots you want to keep in the new copy, then
 probably...

 -B

 --
 Brandon High : bh...@freaks.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
If you install Opensolaris with the AHCI settings off, then switch them on,
it will fail to boot


I had to reinstall with the settings correct.

the best way to tell if ahci is working is to use cfgadm
if you see your drives there, ahci is on

if not, then you may need to reinstall with it on (for the rpool at least)
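Concretely, a couple of quick checks (illustrative; the exact output varies by system):

```shell
# 1. cfgadm lists sata-port attachment points only when the controller is in
#    AHCI/SATA mode -- under legacy IDE mode, no sata entries appear.
cfgadm -al | grep -i sata

# 2. prtconf -D shows which driver each device is bound to; look for "ahci"
#    rather than "pci-ide".
prtconf -D | grep -i ahci
```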


On Sat, May 22, 2010 at 4:43 PM, Brian broco...@vt.edu wrote:

 Is there a way within opensolaris to detect if AHCI is being used by
 various controllers?

 I suspect you may be accurate and AHCI is not turned on.  The bios for this
 particular motherboard is fairly confusing on the AHCI settings.  The only
 setting I have is actually in the raid section, and it seems to let me select
 between IDE/AHCI/RAID as an option.  However, I can't tell if it applies
 only if one is using software RAID.

 If I set it to AHCI, another screen appears prior to boot that is titled
 AMD AHCI BIOS.  However, opensolaris hangs during booting with this enabled.
 Is there a way from the grub menu to request opensolaris boot without the
 splashscreen, but instead boot with debug information printed to the
 console?
 --
 This message posted from opensolaris.org


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
just to make sure i understand what is going on here,

you have an rpool which is having performance issues, and you discovered AHCI
was disabled?


you enabled it, and now it won't boot.  correct?

This happened to me and the solution was to export my storage pool and
reinstall my rpool with the ahci settings on.

Then i imported my storage pool and all was golden


On Sat, May 22, 2010 at 5:25 PM, Brian broco...@vt.edu wrote:

 Thanks -
   I can give reinstalling a shot.  Is there anything else I should do
 first?  Should I export my tank pool?


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
This didn't work for me.  I had the exact same issue a few days ago.

My motherboard had the following:

Native IDE
AHCI
RAID
Legacy IDE

so naturally i chose AHCI, but it ALSO had a mode called IDE/SATA combined
mode

I thought i needed this to use both the IDE and any SATA ports; turns out it
was basically an IDE emulation mode for SATA. Long story short, i ended up
with opensolaris installed in IDE mode.

I had to reinstall.  I tried the livecd/import method and it still failed to
boot.


On Sat, May 22, 2010 at 5:30 PM, Ian Collins i...@ianshome.com wrote:

 On 05/23/10 08:52 AM, Thomas Burgess wrote:

 If you install Opensolaris with the AHCI settings off, then switch them
 on, it will fail to boot


 I had to reinstall with the settings correct.

  Well you probably didn't have to.  Booting from the live CD and importing
 the pool would have put things right.

 --
 Ian.




Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
this old thread has info on how to switch out of ide-sata combined mode


http://opensolaris.org/jive/thread.jspa?messageID=448758#448758




On Sat, May 22, 2010 at 5:32 PM, Ian Collins i...@ianshome.com wrote:

 On 05/23/10 08:43 AM, Brian wrote:

 Is there a way within opensolaris to detect if AHCI is being used by
 various controllers?

 I suspect you may be accurate and AHCI is not turned on.  The bios for this
 particular motherboard is fairly confusing on the AHCI settings.  The only
 setting I have is actually in the raid section, and it seems to let me select
 between IDE/AHCI/RAID as an option.  However, I can't tell if it applies
 only if one is using software RAID.



 [answered in other post]


  If I set it to AHCI, another screen appears prior to boot that is titled
 AMD AHCI BIOS.  However, opensolaris hangs during booting with this enabled.
 Is there a way from the grub menu to request opensolaris boot without the
 splashscreen, but instead boot with debug information printed to the
 console?



 Just hit a key once the bar is moving.

 --
 Ian.




Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
GREAT, glad it worked for you!



On Sat, May 22, 2010 at 7:39 PM, Brian broco...@vt.edu wrote:

 Ok.  What worked for me was booting with the live CD and doing:

 pfexec zpool import -f rpool
 reboot

 After that I was able to boot with AHCI enabled.  The performance issues I
 was seeing are now also gone.  I am getting around 100 to 110 MB/s during a
 scrub.  Scrubs are completing in 20 minutes for 1TB of data rather than 1.2
 hours.


[zfs-discuss] snapshots send/recv

2010-05-22 Thread Thomas Burgess
I'm confused... I have a filesystem on server 1 called tank/nas/dump

I made a snapshot called first

zfs snapshot tank/nas/d...@first

then i did a zfs send/recv like:

zfs send tank/nas/d...@first | ssh wonsl...@192.168.1.xx /bin/pfexec
/usr/sbin/zfs recv tank/nas/dump


this worked fine. next, today, i wanted to send what has changed

i did


zfs snapshot tank/nas/d...@second


now, here's where i'm confused... from reading the man page i thought this
command would work:


pfexec zfs send -i tank/nas/d...@first tank/nas/d...@second| ssh
wonsl...@192.168.1.15 /bin/pfexec /usr/sbin/zfs recv -vd tank/nas/dump



but i get an error:

cannot receive incremental stream: destination tank/nas/dump has been
modified
since most recent snapshot


why is this?


Re: [zfs-discuss] snapshots send/recv

2010-05-22 Thread Thomas Burgess
On Sat, May 22, 2010 at 9:26 PM, Ian Collins i...@ianshome.com wrote:

 On 05/23/10 01:18 PM, Thomas Burgess wrote:


 this worked fine, next today, i wanted to send what has changed

 i did
 zfs snapshot tank/nas/d...@second

  now, here's where i'm confused... from reading the man page i thought this
  command would work:

 pfexec zfs send -i tank/nas/d...@first tank/nas/d...@second| ssh
  wonsl...@192.168.1.15 /bin/pfexec
 /usr/sbin/zfs recv -vd tank/nas/dump

  It should (you can shorten the first snap to just first).


 but i get an error:

 cannot receive incremental stream: destination tank/nas/dump has been
 modified
 since most recent snapshot

  Well has it?  Even wandering around the filesystem with atime enabled
 will cause this error.

 Add -f to the receive to force a roll-back to the state after the original
 snap.

 Ahh, this i didn't know. Yes, i DID cd to the dir and check some stuff and
atime IS enabled... this is NOT very intuitive.

adding -F worked...thanks
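
For anyone else who hits this, the whole incremental cycle with the -F fix
looks roughly like the sketch below. It is written as a dry run (ECHO=echo
just prints the commands instead of running them), using the dataset name,
receiver address, and pfexec/zfs paths from this thread:

```shell
# Dry-run sketch of an incremental zfs send/recv.  -F on the receive
# rolls the destination back to its last snapshot first, discarding
# local changes such as atime updates from browsing the filesystem.
ECHO=echo            # set ECHO= to actually run the commands
SRC=tank/nas/dump
DST=tank/nas/dump
DST_HOST=192.168.1.15

$ECHO zfs snapshot "${SRC}@second"
$ECHO "zfs send -i ${SRC}@first ${SRC}@second |" \
      "ssh ${DST_HOST} pfexec /usr/sbin/zfs recv -F -vd ${DST}"
```

Alternatively, setting readonly=on (or atime=off) on the receiving dataset
avoids the "has been modified" error in the first place.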




 --

 Ian.




Re: [zfs-discuss] snapshots send/recv

2010-05-22 Thread Thomas Burgess
ok, so forcing just basically makes it drop whatever changes were made.

That's what i was wondering... this is what i expected.


On Sun, May 23, 2010 at 12:05 AM, Ian Collins i...@ianshome.com wrote:

 On 05/23/10 03:56 PM, Thomas Burgess wrote:

 let me ask a question though.

 Lets say i have a filesystem

 tank/something

 i make the snapshot

 tank/someth...@one

 i send/recv it

 then i do something (add a file...remove something, whatever) on the send
 side, then i do a send/recv and force it of the next filesystem

  What do you mean force it of the next filesystem?


  will the new recv'd filesystem be identical to the original forced
 snapshot or will it be a combination of the 2?


 The received filesystem will be identical to the sending one.

 --
 Ian.




Re: [zfs-discuss] send/recv over ssh

2010-05-21 Thread Thomas Burgess
I seem to be getting decent speed with arcfour (this was what i was using to
begin with)

Thanks for all the help... this honestly was just me being stupid... looking
back on yesterday, i can't even remember what i was doing wrong now... i was
REALLY tired when i asked this question.


On Fri, May 21, 2010 at 2:43 PM, Brandon High bh...@freaks.com wrote:

 On Fri, May 21, 2010 at 11:28 AM, David Dyer-Bennet d...@dd-b.net wrote:
  I thought I remembered a none cipher, but couldn't find it the other
  year and decided I must have been wrong.  I did use ssh-1, so maybe I
  really WAS remembering after all.

 It may have been in ssh2 as well, or at least the commercial version
 .. I thought it used to be a compile time option for openssh too.

  Seems a high price to pay to try to protect idiots from being idiots.
  Anybody who doesn't understand that encryption = none means it's not
  encrypted and hence not safe isn't safe as an admin anyway.

 Well, it won't expose your passwords since the key exchange is still
 encrypted ... That's good, right?

 Circling back to the original topic, you can use ssh to start up
 mbuffer on the remote side, then start the send. Something like:

 #!/bin/bash

 # Run mbuffer | zfs recv on the remote side (quoted so the pipe runs
 # remotely), then feed it from the local zfs send.
 ssh -f r...@${RECV_HOST} "mbuffer -q -I ${SEND_HOST}:1234 | zfs recv puddle/tank"
 sleep 1
 zfs send -R tank/foo/bar | mbuffer -O ${RECV_HOST}:1234


 When I was moving datasets between servers, I was on the console of
 both, so manually starting the send/recv was not a problem.

 I've tried doing it with netcat rather than mbuffer but it was
 painfully slow, probably due to network buffers. ncat (from the nmap
 devs) may be a suitable alternative, and can support ssl and
 certificate based auth.

 -B

 --
 Brandon High : bh...@freaks.com


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-21 Thread Thomas Burgess
 8:10:12:15:20
supported_max_cstates   0
vendor_id   AuthenticAMD

module: cpu_infoinstance: 7
name:   cpu_info7   class:misc
brand   AMD Opteron(tm) Processor 6128
cache_id7
chip_id 0
clock_MHz   2000
clog_id 7
core_id 7
cpu_typei386
crtime  9171.560266487
current_clock_Hz20
current_cstate  0
family  16
fpu_typei387 compatible
implementation  x86 (chipid 0x0 AuthenticAMD 100F91
family 16 model 9 step 1 clock 2000 MHz)
model   9
ncore_per_chip  8
ncpu_per_chip   8
pg_id   11
pkg_core_id 7
snaptime113230.737322698
socket_type G34
state   on-line
state_begin 1274377645
stepping1
supported_frequencies_Hz
 8:10:12:15:20
supported_max_cstates   0
vendor_id   AuthenticAMD


On Mon, May 17, 2010 at 5:55 PM, Dennis Clarke dcla...@blastwave.org wrote:


 On 05-17-10, Thomas Burgess wonsl...@gmail.com wrote:
 psrinfo -pv shows:
 
 The physical processor has 8 virtual processors (0-7)
 x86  (AuthenticAMD 100F91 family 16 model 9 step 1 clock 200 MHz)
AMD Opteron(tm) Processor 6128   [  Socket: G34 ]
 

 That's odd.

 Please try this :

 # kstat -m cpu_info -c misc
 module: cpu_infoinstance: 0
 name:   cpu_info0   class:misc
brand   VIA Esther processor 1200MHz
cache_id0
chip_id 0
clock_MHz   1200
clog_id 0
core_id 0
cpu_typei386
crtime  3288.24125364
current_clock_Hz1199974847
current_cstate  0
family  6
fpu_typei387 compatible
implementation  x86 (CentaurHauls 6A9 family 6 model
 10 step 9 clock 1200 MHz)
model   10
ncore_per_chip  1
ncpu_per_chip   1
pg_id   -1
pkg_core_id 0
snaptime1526742.97169617
socket_type Unknown
state   on-line
state_begin 1272610247
stepping9
supported_frequencies_Hz1199974847
supported_max_cstates   0
vendor_id   CentaurHauls

 You should get a LOT more data.

 Dennis




Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-21 Thread Thomas Burgess
Something i've been meaning to ask

I'm transferring some data from my older server to my newer one.  The older
server has a socket 775 Intel Q9550, 8 GB DDR2-800, and 20 1TB drives in
raidz2 (3 vdevs: 2 with 7 drives, one with 6) connected to 3 AOC-SAT2-MV8
cards, spread as evenly across them as i could.

The new server is socket G34 based, with the Opteron 6128 8-core CPU, 16 GB
DDR3-1333 ECC RAM, and 10 2TB drives (so far) in a single raidz2 vdev
connected to 3 LSI SAS3081E-R cards (flashed with IT firmware).

I'm sure this is due to something i don't understand, but during zfs
send/recv from the old server to the new server (3 send/recv streams) I'm
noticing the loadavg on the old server is much less than the new one.

this is from top on the old server:

load averages:  1.58,  1.57,  1.37;   up 5+05:13:17
 04:52:42


and this is the newer server

load averages:  6.20,  5.98,  5.30;   up 1+05:03:02
 18:49:57




shouldn't the newer server have LESS load?

Please forgive my ubernoobness.


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-21 Thread Thomas Burgess
is 3 zfs recv's random?



On Fri, May 21, 2010 at 10:03 PM, Brandon High bh...@freaks.com wrote:

 On Fri, May 21, 2010 at 5:54 PM, Thomas Burgess wonsl...@gmail.com
 wrote:
  shouldn't the newer server have LESS load?
  Please forgive my ubernoobness.

 Depends on what it's doing!

 Load average is really how many processes are waiting to run, so it's
 not always a useful metric. If there are processes waiting on disk,
 you can have high load with almost no cpu use. Check the iowait with
 iostat or top.

 You've got a pretty wide stripe, which isn't going to give the best
 performance, especially for random write workloads. Your old 3 vdev
 config will have better random write performance.

 Check to see what's using the CPU with top or prstat. prstat gives
 better info for threads, imo.

 -B

 --
 Brandon High : bh...@freaks.com

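
Brandon's point above — that load average counts runnable and waiting
processes rather than CPU use — can be sanity-checked on most POSIX boxes.
A rough, non-Solaris-specific sketch (on Solaris itself, iostat -xcn and
prstat give the real picture):

```shell
# Compare the 1-minute load average to the CPU count.  A load far
# above the CPU count while CPUs sit idle usually means threads
# blocked on I/O, which iostat/prstat would confirm.
load1=$(uptime | awk -F'load average[s]*: ' '{print $2}' | cut -d, -f1 | tr -d ' ')
cpus=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
echo "1-min load: $load1 across $cpus CPUs"
```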


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-21 Thread Thomas Burgess
yeah, i'm aware of the performance aspects.  I use these servers as mostly
hd video servers for my house...they don't need to perform amazingly.  I
originally went with the setup on the old server because of everything i had
read about performance with wide stripes...in all honesty it performed
amazingly well, much more than i truly need...i plan to have 2 raidz2
stripes of 10 drives in this server (new one).

At most it will be serving 4-5 HD streams (mostly 720p mkv files, with some
1080p as well)

The older server can EASILY max out 2 Gb/s links... i imagine the new server
will be able to do this as well... i think a scrub of the old server takes
4-5 hours... i'm not sure what this equates to in MB/s but it's WAY more
than i ever really need.
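
For the "not sure what this equates to in MB/s" part, it is just data
scrubbed divided by time. The thread never says how full the pool was, so
the fill levels below are hypothetical, illustrative figures only:

```shell
# Back-of-envelope scrub throughput (using 1 TB = 1,000,000 MB).
scrub_rate_mb_s() {   # args: data_in_TB  hours
    awk -v tb="$1" -v h="$2" 'BEGIN { printf "%.0f\n", tb * 1000000 / (h * 3600) }'
}

# Hypothetical fill levels for a 4.5 hour scrub.
for tb in 5 10 14; do
    echo "$tb TB scrubbed in 4.5 h -> $(scrub_rate_mb_s "$tb" 4.5) MB/s"
done
```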

This is what led me to use wider stripes in the new server, and i'm honestly
considering redoing the old server as well; if i switched to 2 wider
stripes instead of 3 i'd gain another TB or two... for my use i don't think
that would be a horrible thing.
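
The "another TB or two" estimate checks out as simple arithmetic: each
raidz2 vdev gives up two drives to parity, so the same 20 drives arranged
as 7+7+6 versus 10+10 differ by two data drives. A quick sketch:

```shell
# Usable (non-parity) drive count for a set of raidz2 vdev widths:
# each raidz2 vdev spends exactly two drives on parity.
raidz2_data_drives() {   # args: one width per vdev
    total=0
    for width in "$@"; do
        total=$((total + width - 2))
    done
    echo "$total"
}

echo "3 vdevs (7+7+6):     $(raidz2_data_drives 7 7 6) data drives"
echo "2 wide vdevs (10+10): $(raidz2_data_drives 10 10) data drives"
```

With 1TB drives, that is 14 TB versus 16 TB of raw usable space.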


On Fri, May 21, 2010 at 10:03 PM, Brandon High bh...@freaks.com wrote:

 On Fri, May 21, 2010 at 5:54 PM, Thomas Burgess wonsl...@gmail.com
 wrote:
  shouldn't the newer server have LESS load?
  Please forgive my ubernoobness.

 Depends on what it's doing!

 Load average is really how many processes are waiting to run, so it's
 not always a useful metric. If there are processes waiting on disk,
 you can have high load with almost no cpu use. Check the iowait with
 iostat or top.

 You've got a pretty wide stripe, which isn't going to give the best
 performance, especially for random write workloads. Your old 3 vdev
 config will have better random write performance.

 Check to see what's using the CPU with top or prstat. prstat gives
 better info for threads, imo.

 -B

 --
 Brandon High : bh...@freaks.com



Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-21 Thread Thomas Burgess
I can't tell you for sure

For some reason the server lost power and it's taking forever to come back
up.

(i'm really not sure what happened)

anyways, this leads me to my next couple questions:


Is there any way to resume a zfs send/recv?

Why is it taking so long for the server to come up?
it's stuck on Reading ZFS config

and there is a FLURRY of hard drive lights blinking (all 10 in sync )



On Sat, May 22, 2010 at 12:26 AM, Brandon High bh...@freaks.com wrote:

 On Fri, May 21, 2010 at 7:57 PM, Thomas Burgess wonsl...@gmail.com
 wrote:
  is 3 zfs recv's random?

 It might be. What do a few reports of 'iostat -xcn 30' look like?

 -B

 --
 Brandon High : bh...@freaks.com



Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-21 Thread Thomas Burgess
yah, it seems that rsync is faster for what i need anyways... at least right
now...


On Sat, May 22, 2010 at 1:07 AM, Ian Collins i...@ianshome.com wrote:

 On 05/22/10 04:44 PM, Thomas Burgess wrote:

 I can't tell you for sure

 For some reason the server lost power and it's taking forever to come back
 up.

 (i'm really not sure what happened)

 anyways, this leads me to my next couple questions:


 Is there any way to resume a zfs send/recv

  Nope.


  Why is it taking so long for the server to come up?
 it's stuck on Reading ZFS config

 and there is a FLURRY of hard drive lights blinking (all 10 in sync )

  It's cleaning up the mess.  If you had a lot of data copied over, it'll
 take a while deleting it!

 --
 Ian.




Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-21 Thread Thomas Burgess
  0.24.21.5  13  17 c6t4d0
   55.72.0 3821.3   91.1  0.3  0.24.73.0   6  10 c6t5d0
   81.22.0 5866.7   91.2  0.2  0.41.95.2   5  14 c6t6d0
0.9  227.2   23.4 28545.1  4.7  0.6   20.42.8  63  64 c8t5d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c4t7d0
 cpu
 us sy wt id
 39 32  0 29
extended device statistics
r/sw/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
0.00.00.00.0  0.0  0.00.00.0   0   0 fd0
1.52.4   35.4   33.6  0.0  0.03.61.0   0   0 c8t1d0
  105.81.9 5560.1   95.5  0.3  0.32.72.9   8  16 c5t0d0
  109.62.5 5546.4   95.6  0.0  0.50.04.3   0  13 c4t0d0
  110.82.6 5504.7   95.4  0.3  0.32.22.6   7  15 c4t1d0
  104.62.4 5596.9   95.5  0.0  0.60.05.4   0  15 c5t1d0
  109.92.2 5522.1   86.1  0.2  0.32.02.5   7  14 c4t2d0
  104.61.9 5533.6   86.2  0.3  0.32.53.1   7  16 c5t2d0
  109.22.7 5498.4   86.1  0.2  0.32.12.4   7  14 c4t3d0
  105.32.9 5593.8   95.5  0.0  0.60.05.1   0  15 c5t3d0
   57.81.9 3938.4   90.7  0.2  0.13.51.5   6   9 c4t5d0
   50.82.3 3298.6   90.8  0.0  0.30.05.2   0   8 c5t4d0
  105.02.6 5541.2   86.1  0.4  0.23.71.4  11  15 c5t5d0
   90.82.3 6376.7   90.7  0.2  0.32.43.1   6  13 c5t6d0
   87.41.8 6085.2   90.6  0.0  0.50.05.4   0  13 c5t7d0
  104.22.4 5550.8   86.1  0.0  0.50.05.0   0  14 c6t0d0
  106.82.4 5543.6   95.5  0.0  0.60.05.5   0  16 c6t1d0
  105.42.5 5517.5   86.1  0.4  0.23.81.4  12  16 c6t2d0
  106.62.4 5569.1   95.6  0.0  0.50.05.0   0  15 c6t3d0
  107.22.2 5536.4   86.1  0.2  0.32.12.8   7  15 c6t4d0
   61.22.4 4085.2   90.7  0.0  0.30.05.4   0  10 c6t5d0
   70.31.8 5018.2   90.7  0.3  0.14.71.7   9  12 c6t6d0
0.8  203.3   12.3 25514.5  3.9  0.6   19.22.7  54  55 c8t5d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c4t7d0
 cpu
 us sy wt id
 38 30  0 32
extended device statistics
r/sw/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
0.00.00.00.0  0.0  0.00.00.0   0   0 fd0
2.22.5   64.2   35.2  0.0  0.03.30.9   0   0 c8t1d0
   98.63.1 5441.3  110.3  0.0  0.60.05.9   0  16 c5t0d0
  102.13.7 5392.7  110.2  0.0  0.50.04.3   0  13 c4t0d0
  104.13.3 5390.7  110.4  0.0  0.50.05.0   0  15 c4t1d0
   98.23.0 5437.3  110.2  0.0  0.50.05.1   0  14 c5t1d0
  104.73.8 5437.3  104.5  0.0  0.50.04.8   0  15 c4t2d0
   97.73.4 5481.1  104.6  0.0  0.60.06.0   0  16 c5t2d0
  103.13.4 5468.4  104.6  0.0  0.60.05.2   0  15 c4t3d0
   98.73.0 5415.2  110.3  0.0  0.50.05.1   0  14 c5t3d0
   55.73.1 3883.4   93.7  0.1  0.12.02.5   4   8 c4t5d0
   44.52.9 3141.2   93.6  0.0  0.30.05.5   0   7 c5t4d0
   99.23.3 5464.0  104.5  0.4  0.24.21.5  12  15 c5t5d0
   82.32.8 6119.3   93.4  0.0  0.50.06.4   0  14 c5t6d0
   75.22.7 5601.1   93.4  0.1  0.41.74.8   3  13 c5t7d0
   97.83.1 5458.8  104.5  0.0  0.50.05.2   0  14 c6t0d0
   99.23.2 5441.5  110.2  0.0  0.60.05.8   0  16 c6t1d0
   98.43.0 5475.7  104.6  0.3  0.43.03.5   8  17 c6t2d0
   99.83.0 5434.4  110.1  0.0  0.50.05.1   0  14 c6t3d0
  100.63.2 5453.9  104.6  0.0  0.60.05.5   0  15 c6t4d0
   54.93.0 3878.1   93.5  0.1  0.21.54.2   3   9 c6t5d0
   68.42.9 5128.3   93.5  0.2  0.33.14.2   6  13 c6t6d0
0.9  201.9   34.2 25338.0  3.8  0.5   18.92.6  51  52 c8t5d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c4t7d0


On Sat, May 22, 2010 at 12:26 AM, Brandon High bh...@freaks.com wrote:

 On Fri, May 21, 2010 at 7:57 PM, Thomas Burgess wonsl...@gmail.com
 wrote:
  is 3 zfs recv's random?

 It might be. What do a few reports of 'iostat -xcn 30' look like?

 -B

 --
 Brandon High : bh...@freaks.com



Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-21 Thread Thomas Burgess
well it wasn't.

it was running pretty slow.

i had one really big filesystem... with rsync i'm able to do multiple
streams and it's moving much faster.


On Sat, May 22, 2010 at 1:45 AM, Ian Collins i...@ianshome.com wrote:

 On 05/22/10 05:22 PM, Thomas Burgess wrote:

  yah, it seems that rsync is faster for what i need anyways... at least
  right now...

  ZFS send/receive should run at wire speed for a Gig-E link.

 Ian.




[zfs-discuss] send/recv over ssh

2010-05-20 Thread Thomas Burgess
I know i'm probably doing something REALLY stupid... but for some reason i
can't get send/recv to work over ssh.  I just built a new media server and
i'd like to move a few filesystems from my old server to my new server, but
for some reason i keep getting strange errors...

At first i'd see something like this:


pfexec: can't get real path of ``/usr/bin/zfs''


or something like this:


zfs: Command not found


google mentions something about nfs, but i've disabled autofs...

anyways, thanks for any help... i know it is just something stupid but my
brain just isn't working...


Re: [zfs-discuss] send/recv over ssh

2010-05-20 Thread Thomas Burgess
also, i forgot to say:


one server is b133, the new one is b134



On Thu, May 20, 2010 at 4:23 PM, Thomas Burgess wonsl...@gmail.com wrote:

 I know i'm probably doing something REALLY stupid... but for some reason i
 can't get send/recv to work over ssh.  I just built a new media server and
 i'd like to move a few filesystems from my old server to my new server, but
 for some reason i keep getting strange errors...

 At first i'd see something like this:


 pfexec: can't get real path of ``/usr/bin/zfs''


 or something like this:


 zfs: Command not found


 google mentions something about nfs, but i've disabled autofs...

 anyways, thanks for any help... i know it is just something stupid but my
 brain just isn't working...




Re: [zfs-discuss] Pool revcovery from replaced disks.

2010-05-18 Thread Thomas Burgess
wow, that's a truly excellent question.

If you COULD do it, it might work with a simple import

but i have no idea...i'd love to know myself.


On Tue, May 18, 2010 at 7:06 AM, Demian Phillips
demianphill...@gmail.com wrote:

 Is it possible to recover a pool (as it was) from a set of disks that
 were replaced during a capacity upgrade?


Re: [zfs-discuss] Ideal SATA/SAS Controllers for ZFS

2010-05-18 Thread Thomas Burgess
A really great alternative to the UIO cards for those who don't want the
headache of modifying the brackets or cases is the Intel SASUC8I.

This is a rebranded LSI SAS3081E-R.

It can be flashed with the LSI IT firmware from the LSI website and is
physically identical to the LSI card.  It is really the exact same card, and
typically around 140-160 dollars.

These are what i went with.
On Tue, May 18, 2010 at 12:28 PM, Marc Bevand m.bev...@gmail.com wrote:

 Marc Nicholas geekything at gmail.com writes:
 
  Nice write-up, Marc. Aren't the SuperMicro cards their funny UIO form
  factor? Wouldn't want someone buying a card that won't work in a standard
  chassis.

 Yes, 4 of the 6 Supermicro cards are UIO cards. I added a warning about it.
 Thanks.

 -mrb



Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-17 Thread Thomas Burgess
hey, when i do this single user boot, is there any way to capture what pops
up on the screen?  It's a LOT of stuff.


anyways, it seems to work fine when i do singleuser -srv

cpustat -h lists exactly what you said it should plus a lot more (though the
more is above, so like you said, it shows what it should show at the
bottom)


I'll capture all that later and post it.


On Sat, May 15, 2010 at 8:35 PM, Dennis Clarke dcla...@blastwave.org wrote:

 - Original Message -
 From: Thomas Burgess wonsl...@gmail.com
 Date: Saturday, May 15, 2010 8:09 pm
 Subject: Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?
 To: Orvar Korvar knatte_fnatte_tja...@yahoo.com
 Cc: zfs-discuss@opensolaris.org


  Well i just wanted to let everyone know that preliminary results are
 good.
   The livecd booted, all important things seem to be recognized. It
  sees all
  16 gb of ram i installed and all 8 cores of my opteron 6128
 
  The only real shocker is how loud the norco RPC-4220 fans are (i have
  another machine with a norco 4020 case so i assumed the fans would be
  similar.this was a BAD assumption)  This thing sounds like a hair
  dryer
 
  Anyways, I'm running the install now so we'll see how that goes. It
  did take
  about 10 minutes to find a disk durring the installer, but if i
 remember
  right, this happened on other machines as well.
 

 Once you have the install done could you post ( somewhere ) what you see
 during a single user mode boot with options -srv ?

 I would like to see all the gory details.

 Also, could you run cpustat -h ?

 At the bottom, according to usr/src/uts/intel/pcbe/opteron_pcbe.c you should
 see :

 See BIOS and Kernel Developer's Guide (BKDG) For AMD Family 10h
 Processors (AMD publication 31116)

 The following registers should be listed :

  #defineAMD_FAMILY_10h_generic_events
 \
{ PAPI_tlb_dm,DC_dtlb_L1_miss_L2_miss,  0x7 },  \
{ PAPI_tlb_im,IC_itlb_L1_miss_L2_miss,  0x3 },  \
{ PAPI_l3_dcr,L3_read_req,  0xf1 }, \
{ PAPI_l3_icr,L3_read_req,  0xf2 }, \
{ PAPI_l3_tcr,L3_read_req,  0xf7 }, \
{ PAPI_l3_stm,L3_miss,  0xf4 }, \
{ PAPI_l3_ldm,L3_miss,  0xf3 }, \
{ PAPI_l3_tcm,L3_miss,  0xf7 }


 You should NOT see anything like this :

 r...@aequitas:/root# uname -a
 SunOS aequitas 5.11 snv_139 i86pc i386 i86pc Solaris
 r...@aequitas:/root# cpustat -h
 cpustat: cannot access performance counters - Operation not applicable


 ... as well as psrinfo -pv please ?


 When I get my HP Proliant with the 6174 procs I'll be sure to post whatever
 I see.

 Dennis



Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-17 Thread Thomas Burgess
psrinfo -pv shows:


The physical processor has 8 virtual processors (0-7)
x86  (AuthenticAMD 100F91 family 16 model 9 step 1 clock 200 MHz)
   AMD Opteron(tm) Processor 6128   [  Socket: G34 ]




On Sat, May 15, 2010 at 8:35 PM, Dennis Clarke dcla...@blastwave.org wrote:

 - Original Message -
 From: Thomas Burgess wonsl...@gmail.com
 Date: Saturday, May 15, 2010 8:09 pm
 Subject: Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?
 To: Orvar Korvar knatte_fnatte_tja...@yahoo.com
 Cc: zfs-discuss@opensolaris.org


  Well i just wanted to let everyone know that preliminary results are
 good.
   The livecd booted, all important things seem to be recognized. It
  sees all
  16 gb of ram i installed and all 8 cores of my opteron 6128
 
  The only real shocker is how loud the norco RPC-4220 fans are (i have
  another machine with a norco 4020 case so i assumed the fans would be
  similar.this was a BAD assumption)  This thing sounds like a hair
  dryer
 
  Anyways, I'm running the install now so we'll see how that goes. It
  did take
  about 10 minutes to find a disk durring the installer, but if i
 remember
  right, this happened on other machines as well.
 

 Once you have the install done could you post ( somewhere ) what you see
 during a single user mode boot with options -srv ?

 I would like to see all the gory details.

 Also, could you run cpustat -h ?

 At the bottom, according to usr/src/uts/intel/pcbe/opteron_pcbe.c you should
 see :

 See BIOS and Kernel Developer's Guide (BKDG) For AMD Family 10h
 Processors (AMD publication 31116)

 The following registers should be listed :

  #defineAMD_FAMILY_10h_generic_events
 \
{ PAPI_tlb_dm,DC_dtlb_L1_miss_L2_miss,  0x7 },  \
{ PAPI_tlb_im,IC_itlb_L1_miss_L2_miss,  0x3 },  \
{ PAPI_l3_dcr,L3_read_req,  0xf1 }, \
{ PAPI_l3_icr,L3_read_req,  0xf2 }, \
{ PAPI_l3_tcr,L3_read_req,  0xf7 }, \
{ PAPI_l3_stm,L3_miss,  0xf4 }, \
{ PAPI_l3_ldm,L3_miss,  0xf3 }, \
{ PAPI_l3_tcm,L3_miss,  0xf7 }


 You should NOT see anything like this :

 r...@aequitas:/root# uname -a
 SunOS aequitas 5.11 snv_139 i86pc i386 i86pc Solaris
 r...@aequitas:/root# cpustat -h
 cpustat: cannot access performance counters - Operation not applicable


 ... as well as psrinfo -pv please ?


 When I get my HP Proliant with the 6174 procs I'll be sure to post whatever
 I see.

 Dennis



Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-17 Thread Thomas Burgess
no... it doesn't.  The only sata ports that show up are the ones connected
to the backplane via the reverse breakout sas cable... and they show as
empty... so i'm thinking that opensolaris isn't working with the chipset
sata.

In the bios i can select from:

Native IDE
AMD_AHCI
RAID
Legacy IDE


I have it set to AMD_AHCI... but my board also has an IDE slot which i was
using for the CDROM drive (this is what i used to load opensolaris in the
first place).

I also have an option called Sata IDE combined mode.

I think this may be my problem... i had this enabled because i thought i
needed it in order to use both sata and ide... i think now it's something
else.


I'm going to try booting with it off; if that doesn't work, I'll try
reinstalling with it disabled.



On Sun, May 16, 2010 at 8:18 PM, Ian Collins i...@ianshome.com wrote:

 On 05/17/10 12:08 PM, Thomas Burgess wrote:

 well, i haven't had a lot of time to work with this...but i'm having
 trouble getting the onboard sata to work in anything but NATIVE IDE mode.


 I'm not sure exactly what the problem isi'm wondering if i bought the
 wrong cable (i have a norco 4220 case so the drives connect via a sas
 sff-8087 on the backpane)

 I thought this required a reverse breakout cable but maybe i was
 wrongthis is the first time i've worked with sas

 on the otherhand, I was able to flash my intel Intel SASUC8I cards with
 the LSI SAS3081E IT firmware from the LSI site.  These seem to work fine.  I
 think i'm just going to order a 3rd card and put it in the pci-e x4 slot.  I
 don't want 16 drives running as sata and 4 running in IDE mode.Is there
 any way i can tell if the drive i installed opensolaris to is in IDE or SATA
 mode?

  Does it show up in cfgadm?

 --
 Ian.




Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-17 Thread Thomas Burgess
OK, well, this was part of the problem.

I disabled the SATA IDE combined mode and reinstalled OpenSolaris (I tried
to just disable it, but OSOL wouldn't boot).


Now the SSD DOES show up in cfgadm, so it seems to be in SATA mode... but
the drives connected to the reverse-breakout cable still don't show up.

On the bright side, the drives connected to my SAS cards (through the same
backplane, with a standard SFF-8087-to-SFF-8087 cable) DO show up.


So, now I just need to figure out why these 4 drives aren't showing up.

(My case is the Norco RPC-4220. I thought I'd be OK with 2 SAS cards (8 SATA
ports each) and then 4 of the onboard ports using the reverse-breakout
cable... something must be wrong with the cable. I'll test the 2 drives
connected directly in a bit; I have to take everything apart to do that.)

On Mon, May 17, 2010 at 4:04 PM, Brandon High bh...@freaks.com wrote:

 On Mon, May 17, 2010 at 12:51 PM, Thomas Burgess wonsl...@gmail.com
 wrote:
  In the bios i can select from:
  Native IDE
  AMD_AHCI

 This is probably what you want. AHCI is supposed to be chipset agnostic.

  I also have an option called  Sate IDE combined mode

 See if there's anything in the docs about what this actually does. You
 might need it to use the PATA port, but it could be what's messing
 things up. If you can't use the cdrom, maybe install from a thumb
 drive or usb crdrom. (My ASUS M2N-LR board refuses to boot from a
 thumb drive. Likewise with a friend's Supermicro Intel board. Both
 work fine from a usb cdrom.)

  I think this may be my problem...i had this enabled, because i thought i
  needed it in order to use both sata and idei think now it's something
  else.

 I think so. It makes the first 4 ports look like IDE drives (two
 channels, two drives per channel) and the remaining BIOS RAID or AHCI.

 -B

 --
 Brandon High : bh...@freaks.com



Re: [zfs-discuss] Strategies for expanding storage area of home storage-server

2010-05-17 Thread Thomas Burgess
I'd have to agree.  Option 2 is probably the best.

I recently found myself in need of more space... I had to build an entirely
new server. My first one was close to full (it has 20 1 TB drives in 3
raidz2 groups of 7/7/6, and I was down to 3 TB), and I ended up going with a
whole new server, with 2 TB drives this time. I considered replacing the
drives in my current server with new 2 TB drives, but for the money it made
more sense to keep that server online and build a second one.

That's where I am now... If I could have done what you are looking to do, it
would have been a lot easier.
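For what it's worth, the capacity-vs-redundancy trade-off among the layouts discussed below can be sketched with quick arithmetic (assuming 1 TB drives and ignoring ZFS metadata overhead, so real figures come out a bit lower):

```shell
# Back-of-the-envelope usable space for the three 12-drive layouts:
echo "12-drive raidz2:    $((12 - 2)) TB usable, survives any 2 failures"
echo "2x 6-drive raidz2:  $((2 * (6 - 2))) TB usable, survives 2 per vdev"
echo "3x 4-drive raidz1:  $((3 * (4 - 1))) TB usable, survives 1 per vdev"
```

The striped-raidz2 layout gives up one drive's worth of space versus striped raidz1, but doubles the parity per vdev.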

On Mon, May 17, 2010 at 11:29 AM, Freddie Cash fjwc...@gmail.com wrote:

 On Mon, May 17, 2010 at 6:25 AM, Andreas Gunnarsson andr...@tiomat.net wrote:

 I've got a home-storage-server setup with Opensolaris (currently dev build
 134) that is quickly running out of storage space, and I'm looking through
 what kind of options I have for expanding it.

 I currently have my storage-pool in a 4x 1TB drive setup in RAIDZ1, and
 have room for 8-9 more drives in the case/controllers.
 Preferably I'd like to change it all to a RAIDZ2 with 12 drives, and 1
 hotspare, but that would require me to transfer out all the data to an
 external storage, and then recreating a new pool, which would require me
 buying some additional external storage that will not be used after I'm done
 with the transfer.

 I could also add 2 more 4 drive vdevs to the current pool, but then I
 would have 3 RAIDZ1 vdevs striped, and I'm not entirely sure that I'm
 comfortable with that level of protection on the data.

 Another version would be creating a 6 drive RAIDZ2 pool, moving the data
 to that one and the destroying the old pool and adding another 6 drive vdev
 to the new pool (striped).

 So the question is what would you recommend for growing my storage space:
 1. Buying extra hardware to copy the data to, and rebuild the pool as a 12
 drive RAIDZ2.
 2. Move data to a 6 drive RAIDZ2 and then destroy the old pool and stripe
 an additional RAIDZ2 vdevs.
 3. Stripe 2 additional RAIDZ1 4 drive vdevs.
 4. Something else.


 I'd go with option 2.

 Create a 6-drive raidz2 vdev in a separate pool.  Migrate the data from the
 old pool to the new pool.  Destroy the old pool.  Create a second 6-drive
 raidz2 vdev in the new pool.  Voila!  You'll have a lot of extra space, be
 able to withstand up to 4 drive failures (2 per vdev), and it should be
 faster as well (even with the added overhead of raidz2).

 Option 3 would give the best performance, but you don't have much leeway in
 terms of resilver time if using 1 TB+ drives, and if a second drive fails
 while the first is resilvering ...

 Option 1 would be horrible in terms of performance.  Especially resilver
 times, as you'll be thrashing 12 drives.

 --
 Freddie Cash
 fjwc...@gmail.com

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-16 Thread Thomas Burgess
Well, I haven't had a lot of time to work with this... but I'm having trouble
getting the onboard SATA to work in anything but Native IDE mode.


I'm not sure exactly what the problem is... I'm wondering if I bought the
wrong cable (I have a Norco 4220 case, so the drives connect via a SAS
SFF-8087 on the backplane).

I thought this required a reverse-breakout cable, but maybe I was
wrong... this is the first time I've worked with SAS.

On the other hand, I was able to flash my Intel SASUC8I cards with the
LSI SAS3081E IT firmware from the LSI site.  These seem to work fine.  I
think I'm just going to order a 3rd card and put it in the PCIe x4 slot.  I
don't want 16 drives running as SATA and 4 running in IDE mode. Is there
any way I can tell if the drive I installed OpenSolaris to is in IDE or SATA
mode?



On Thu, May 13, 2010 at 4:43 AM, Orvar Korvar 
knatte_fnatte_tja...@yahoo.com wrote:

 Great! Please report here so we can read about your impressions.
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-15 Thread Thomas Burgess
Well, I just wanted to let everyone know that preliminary results are good.
 The LiveCD booted, and all the important things seem to be recognized. It
sees all 16 GB of RAM I installed and all 8 cores of my Opteron 6128.

The only real shocker is how loud the Norco RPC-4220 fans are (I have
another machine with a Norco 4020 case, so I assumed the fans would be
similar... this was a BAD assumption).  This thing sounds like a hair dryer.

Anyway, I'm running the install now, so we'll see how that goes. It did take
about 10 minutes to find a disk during the install, but if I remember
right, this happened on other machines as well.


On Thu, May 13, 2010 at 9:56 AM, Thomas Burgess wonsl...@gmail.com wrote:

 I ordered it.  It should be here monday or tuesday.  When i get everything
 built and installed, i'll report back.  I'm very excited.  I am not
 expecting problems now that i've talked to supermicro about it.  Solaris 10
 runs for them so i would imagine opensolaris should be fine too.

 On Thu, May 13, 2010 at 4:43 AM, Orvar Korvar 
 knatte_fnatte_tja...@yahoo.com wrote:

 Great! Please report here so we can read about your impressions.
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss





Re: [zfs-discuss] ZFS home server (Brandon High)

2010-05-15 Thread Thomas Burgess
The Intel SASUC8I is a pretty good deal: around 150 dollars for 8 SAS/SATA
channels.  This card is identical to the LSI SAS3081E-R for a lot less
money.  It doesn't come with cables, but this leaves you free to buy the
type you need (in my case, I needed SFF-8087-to-SFF-8087 cables; some people
will need SFF-8087-to-4x-SATA breakout cables). Either way, cables run 12-20
dollars each (and each card needs 2), so you can tack that on to the
price. These cards also work well with expanders.

They are based on the LSI 1068E chip.


On Sat, May 15, 2010 at 1:41 PM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:

 - Annika annika...@telia.com wrote:

  I'm also about to set up a small home server. This little box
  http://www.fractal-design.com/?view=productprod=39
  is able housing six 3,5 hdd's and also has one 2,5 bay, eg for an
  ssd.
  Fine.
 
  I need to know which SATA controller cards (both PCI and PCI-E) are
  supported in OS, also I'd be grateful for tips on which ones to use in
  a
  non-pro environment.

 See http://www.sun.com/bigadmin/hcl/data/os/ for supported hardware. There
 was also a post in her yesterday or perhaps earlier today about the choice
 of SAS/SATA controllers. Most will do in a home server environment, though.
 AOC-SAT2-MV8 are great controllers, but run on PCI-X, which isn't very
 compatible with PCI Express

 Best regards

 roy
 --
 Roy Sigurd Karlsbakk
 (+47) 97542685
 r...@karlsbakk.net
 http://blogg.karlsbakk.net/
 --
 In all pedagogy it is essential that the curriculum be presented
 intelligibly. It is an elementary imperative for all pedagogues to avoid
 excessive use of idioms of foreign origin. In most cases adequate and
 relevant synonyms exist in Norwegian.
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-13 Thread Thomas Burgess
I ordered it.  It should be here Monday or Tuesday.  When I get everything
built and installed, I'll report back.  I'm very excited.  I am not
expecting problems now that I've talked to Supermicro about it.  Solaris 10
runs for them, so I would imagine OpenSolaris should be fine too.

On Thu, May 13, 2010 at 4:43 AM, Orvar Korvar 
knatte_fnatte_tja...@yahoo.com wrote:

 Great! Please report here so we can read about your impressions.
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-12 Thread Thomas Burgess
This is how I understand it.
I know the network cards are well supported, and I know my storage cards are
supported... the onboard SATA may work and it may not.  If it does, great,
I'll use it for booting; if not, this board has 2 onboard bootable USB
sticks... luckily, USB seems to work regardless.



On Wed, May 12, 2010 at 1:18 AM, Geoff Nordli geo...@gnaa.net wrote:



 On Behalf Of James C. McPherson
 Sent: Tuesday, May 11, 2010 5:41 PM
 
 On 12/05/10 10:32 AM, Michael DeMan wrote:
  I agree on the motherboard and peripheral chipset issue.
 
  This, and the last generation AMD quad/six core motherboards
   all seem to use the AMD SP56x0/SP5100 chipset, which I can't   find
 much
 information about support on for either OpenSolaris or FreeBSD.
 
 If you can get the device driver detection utility to run on it, that will
 give you a
 reasonable idea.
 
  Another issue is the LSI SAS2008 chipset for SAS controller
   which is frequently offered as an onboard option for many motherboards
  as
 well and still seems to be somewhat of a work in progress in   regards to
 being
 'production ready'.
 
 What metric are you using for production ready ?
 Are there features missing which you expect to see in the driver, or is it
 just oh
 noes, I haven't seen enough big customers with it ?
 
 

 I have been wondering what the compatibility is like on OpenSolaris.  My
 perception is basic network driver support is decent, but storage
 controllers are more difficult for driver support.

 My perception is if you are using external cards which you know work for
 networking and storage, then you should be alright.

 Am I out in left-field on this?

 Thanks,

 Geoff


 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-12 Thread Thomas Burgess



 Now wait just a minute. You're casting aspersions on
 stuff here without saying what you're talking about,
 still less where you're getting your info from.

 Be specific - put up, or shut up.


I think he was just trying to tell me that my cpu should be fine, that the
only thing which i might have to worry about is network and disk drivers.


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-11 Thread Thomas Burgess
The onboard SATA is a secondary issue.  If I need to, I'll boot from the
onboard USB slots.  I have 2 LSI 1068E-based SAS controllers which I will be
using.


On Tue, May 11, 2010 at 8:40 PM, James C. McPherson j...@opensolaris.org wrote:

 On 12/05/10 10:32 AM, Michael DeMan wrote:

 I agree on the motherboard and peripheral chipset issue.

 This, and the last generation AMD quad/six core motherboards

  all seem to use the AMD SP56x0/SP5100 chipset, which I can't
  find much information about support on for either OpenSolaris or FreeBSD.

 If you can get the device driver detection utility to run
 on it, that will give you a reasonable idea.


  Another issue is the LSI SAS2008 chipset for SAS controller

  which is frequently offered as an onboard option for many motherboards
  as well and still seems to be somewhat of a work in progress in
  regards to being 'production ready'.

 What metric are you using for production ready ?
 Are there features missing which you expect to see
 in the driver, or is it just oh noes, I haven't
 seen enough big customers with it ?


 James C. McPherson
 --
 Senior Software Engineer, Solaris
 Oracle
 http://www.jmcp.homeunix.com/blog

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-11 Thread Thomas Burgess
Well, I went ahead and ordered the board.  I will report back soon with the
results... I'm pretty excited.  These CPUs seem great on paper.


On Tue, May 11, 2010 at 9:02 PM, Thomas Burgess wonsl...@gmail.com wrote:

 the onboard sata is a secondary issue.  If i need to, i'll boot from the
 oboard usb slots.  I have 2 LSI 1068e based sas controllers which i will be
 using.


 On Tue, May 11, 2010 at 8:40 PM, James C. McPherson 
 j...@opensolaris.org wrote:

 On 12/05/10 10:32 AM, Michael DeMan wrote:

 I agree on the motherboard and peripheral chipset issue.

 This, and the last generation AMD quad/six core motherboards

  all seem to use the AMD SP56x0/SP5100 chipset, which I can't
  find much information about support on for either OpenSolaris or
 FreeBSD.

 If you can get the device driver detection utility to run
 on it, that will give you a reasonable idea.


  Another issue is the LSI SAS2008 chipset for SAS controller

  which is frequently offered as an onboard option for many motherboards
  as well and still seems to be somewhat of a work in progress in
  regards to being 'production ready'.

 What metric are you using for production ready ?
 Are there features missing which you expect to see
 in the driver, or is it just oh noes, I haven't
 seen enough big customers with it ?


 James C. McPherson
 --
 Senior Software Engineer, Solaris
 Oracle
 http://www.jmcp.homeunix.com/blog

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss





[zfs-discuss] asus pike slot

2010-05-08 Thread Thomas Burgess
I was wondering if anyone had any first-hand knowledge of compatibility
between any ASUS PIKE slot expansion cards and OpenSolaris.


I would guess this should work:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816110042

because it's based on the LSI 1068E, but I'm curious if anyone knows for
sure.  Thanks.


Re: [zfs-discuss] When to Scrub..... ZFS That Is

2010-03-13 Thread Thomas Burgess
I scrub once a week.

I think the general rule is:

once a week for consumer-grade drives
once a month for enterprise-grade drives.
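If you want to automate either schedule, a cron entry is the usual approach; a sketch (the pool name "tank" is just a placeholder):

```shell
# Hypothetical root crontab line: scrub the pool "tank" every Sunday at 03:00.
# Progress can be checked afterwards with: zpool status tank
0 3 * * 0 /usr/sbin/zpool scrub tank
```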


On Sat, Mar 13, 2010 at 3:29 PM, Tony MacDoodle tpsdoo...@gmail.com wrote:

 When would it be necessary to scrub a ZFS filesystem?
 We have many rpool, datapool, and a NAS 7130, would you consider to
 schedule monthly scrubs at off-peak hours or is it really necessary?

 Thanks

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] Hardware for high-end ZFS NAS file server - 2010 March edition

2010-03-04 Thread Thomas Burgess
On Thu, Mar 4, 2010 at 4:46 AM, Dan Dascalescu bigbang7+opensola...@gmail.com wrote:

 Please recommend your up-to-date high-end hardware components for building
 a highly fault-tolerant ZFS NAS file server.

 I've seen various hardware lists online (and I've summarized them at
 http://wiki.dandascalescu.com/reviews/storage.edit#Solutions), but they're
 on the cheapo side. I want to build a media server and be done with it for
 a few years (until the next generation of storage media (holograms?
 nanowires?) becomes commercially available). So please, knock yourselves out.
 The bills for ZFS NAS boxes that I've seen run around $1k, and I'm willing
 to invest up to $3k.

 Requirements, in decreasing order of importance:
 1. Extremely fault-tolerant.  I'd like to be able to lose two disks and
 still be OK. I also want any silent hard disk read errors that are detected
 by ZFS, to be reported somehow.
 2. As quiet as it gets.
 3. Able to easily extend storage
 4. (low) If feasible, I'd like to be able to use a Blu-Ray drive with the
 system.

 I also have a few software requirements, which I think are pretty
 independent of the hardware:
 a. Secure – I want to be able to tweak and control access at every level
 b. Very fast network performance. The server should be able to stream 1080p
 while doing a number of other tasks without issues.
 c. Ability to serve all different types of hosts: NFS, SMB, SCP/SFTP
 d. Flexible. I do a number of other things/experiments, and I’d like to be
 able to use it for more than just serving files.

 Really only #1 (reliable) and #2 (quiet) matter most. I've been mulling
 over this server for too long and want to get it over with.

 Looking forward to your recommendations,
 Dan


What I did was this:

I got a Norco 4020 (the 4220 is good too).

Both of those cost around 300-350 dollars.  That is a 4U case with 20
hot-swap bays.

Then I got a decent server board.  I used a Supermicro MBD-X7SE because it
has 4 PCI-X slots.

I got 3 Supermicro AOC-SAT2-MV8 cards for the SATA ports (each has 8).

20 1 TB Seagate drives (but you could use any size that fits your budget)

8 GB of DDR2-800 ECC memory

Three 64 GB SSDs (2 for the rpool mirror and 1 for L2ARC)

An Intel Q9550 CPU

This gives you a pretty beastly machine with around 18-36 raw TB.

I went with 3 raidz2 groups.

I plan to expand it with a SAS expander and another Norco case.  I hope
this gives you some ideas.
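As a sanity check on the raw-vs-usable numbers, here is the quick math for the 7/7/6 raidz2 split the author describes elsewhere in this archive, assuming 1 TB drives and ignoring ZFS overhead:

```shell
# 20 x 1 TB drives in three raidz2 vdevs of 7, 7, and 6 disks;
# each raidz2 vdev spends 2 disks on parity.
echo "raw:    $((7 + 7 + 6)) TB"
echo "usable: $(( (7 - 2) + (7 - 2) + (6 - 2) )) TB"
```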


Re: [zfs-discuss] Non-redundant zpool behavior?

2010-03-04 Thread Thomas Burgess
No. If you don't use redundancy, each disk you add makes the pool that much
more likely to fail.  This is the entire point of raidz.

ZFS stripes data across all vdevs.
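To put a rough number on "that much more likely": with no redundancy, the pool is lost if any one disk dies, so the risk compounds per disk. A quick illustration (the 5% annual failure rate is an arbitrary assumption for the arithmetic, not a measured figure):

```shell
# Annual chance of losing a non-redundant pool of n independent disks,
# assuming each disk fails in a given year with probability p = 0.05.
awk 'BEGIN {
  p = 0.05
  for (n = 1; n <= 8; n *= 2)
    printf "%d disk(s): %.1f%% chance of pool loss per year\n", n, 100 * (1 - (1 - p)^n)
}'
```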

On Thu, Mar 4, 2010 at 12:32 PM, Travis Tabbal tra...@tabbal.net wrote:

 I have a small stack of disks that I was considering putting in a box to
 build a backup server. It would only store data that is duplicated
 elsewhere, so I wouldn't really need redundancy at the disk layer. The
 biggest issue is that the disks are not all the same size. So I can't really
 do a raidz or mirror with them anyway. So I was considering just putting
 them all in one pool. My question is how does zpool behave if I lose one
 disk in this pool? Can I still access the data on the other disks? Or is it
 like a traditional raid0 and I lose the whole pool? Is there a better way to
 deal with this, using my old mismatched hardware?

 Yes, I could probably build a raidz by partitioning and such, but I'd like
 to avoid the complexity. I'd probably just use zfs send/recv to send
 snapshots over or perhaps crashplan.
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Hardware for high-end ZFS NAS file server - 2010 March edition

2010-03-04 Thread Thomas Burgess
It's not quiet by default, but it can be made somewhat quieter by swapping
out the fans or going to larger ones.  It's still totally worth it.

I use smaller, silent HTPCs for the actual media and connect to the Norco
over gigabit.

My Norco box is connected to the network with 2 link-aggregated gigabit
Ethernet cables.

It's very nice.
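For anyone wanting to replicate the aggregation, dladm handles it; a sketch using the post-Crossbow syntax (the interface names and address are hypothetical, older builds used a different key-based syntax, and the switch usually needs matching trunk/LACP configuration):

```shell
# Bond two gigabit NICs into one logical link, then bring it up.
dladm create-aggr -l e1000g0 -l e1000g1 aggr0
ifconfig aggr0 plumb 192.168.1.10 netmask 255.255.255.0 up
```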


On Thu, Mar 4, 2010 at 3:03 PM, Michael Shadle mike...@gmail.com wrote:

 On Thu, Mar 4, 2010 at 4:12 AM, Thomas Burgess wonsl...@gmail.com wrote:

  I got a norco 4020 (the 4220 is good too)
 
  Both of those cost around 300-350 dolars.  That is a 4u case with 20 hot
  swap bays.

 Typically rackmounts are not designed for quiet. He said quietness is
 #2 in his priorities...

 Or does the Norco unit perform quietly or have the ability to be quieter?



Re: [zfs-discuss] Hardware for high-end ZFS NAS file server - 2010 March edition

2010-03-04 Thread Thomas Burgess
Yeah, I can dig it.  I'd be really upset if I couldn't use my rackmount
stuff.  I love my Norco box.  I'm about to build a second one using a SAS
expander... but I can totally understand how noise would be a concern.

At the same time, it's not NEARLY as loud as something like an AC window
unit.



On Thu, Mar 4, 2010 at 3:27 PM, Michael Shadle mike...@gmail.com wrote:

 If I had a decently ventilated closet or space to do it in I wouldn't
 mind noise, but I don't, that's why I had to build my storage machines
 the way I did.

 On Thu, Mar 4, 2010 at 12:23 PM, Thomas Burgess wonsl...@gmail.com
 wrote:
  its not quiet by default but it can be made somewhat more quiet by
 swapping
  out the fans or going to larger fans.  Its still totally worth it.
 
  I use smaller, silent htpc's for the actual media and connect to the
 norco
  over gigabit.
 
  My norco box is connected to the network with 2 link aggregated gigabit
  ethernet cables.
 
  It's very nice.
 
 
  On Thu, Mar 4, 2010 at 3:03 PM, Michael Shadle mike...@gmail.com
 wrote:
 
  On Thu, Mar 4, 2010 at 4:12 AM, Thomas Burgess wonsl...@gmail.com
 wrote:
 
   I got a norco 4020 (the 4220 is good too)
  
   Both of those cost around 300-350 dolars.  That is a 4u case with 20
 hot
   swap bays.
 
  Typically rackmounts are not designed for quiet. He said quietness is
  #2 in his priorities...
 
  Or does the Norco unit perform quietly or have the ability to be
 quieter?
 
 



Re: [zfs-discuss] What's the advantage of using multiple filesystems in a pool

2010-03-01 Thread Thomas Burgess
Also consider that you might not want to snapshot the entire pool.

For instance, if you have a media server, you may have a dump dir and a
torrent dir; you probably wouldn't want to snapshot these because they
change a lot and the snapshots could grow very large (or you may wish to
snapshot them, but only keep a couple of days back).

Whereas you may wish to keep snapshots going back much further on your other
filesystems.

You may also want to use dedup on some filesystems and not others.  I also
keep separate filesystems for different uses on my network and then use
different ACLs and CIFS mounts: I have a couple of filesystems which are
read-write for everyone, others which are read-only, and each user has
their own share which is only for them. For this type of stuff, it's
cool.
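Concretely, that per-filesystem control is just per-dataset properties (the pool and dataset names below are made up for illustration):

```shell
# Different policies for different datasets in the same pool:
zfs set com.sun:auto-snapshot=false tank/torrents  # churny data: no auto-snapshots
zfs set com.sun:auto-snapshot=true  tank/home      # keep snapshot history here
zfs set dedup=on    tank/backups                   # dedup only where it pays off
zfs set sharesmb=on tank/public                    # per-dataset CIFS sharing
```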


On Sun, Feb 28, 2010 at 10:24 PM, tomwaters tomwat...@chadmail.com wrote:

 Hi guys, on my home server I have a variety of directories under a single
 pool/filesystem, Cloud.

 Things like
 cloud/movies  - 4TB
 cloud/music - 100Gig
 cloud/winbackups  - 1TB
 cloud/data   - 1TB

 etc.

 After doing some reading, I see recomendations to have separate filesystem
 to improve performance...but not sure how as it's the same pool?

 Can someone help me understand if/why I should use separate file systems
 for these?

 ta.
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris

2010-03-01 Thread Thomas Burgess
There may be some things we choose not to open source going forward,
similar to how MySQL manages certain value-add[s] at the top of the stack,
Roberts said. It's important to understand the plan now is to deliver value
again out of our IP investment, while at the same time measuring that with
continuing to deliver OpenSolaris in the open.


 This will be a balancing act, one that we'll get right sometimes, but may
 not always.

 -
 From the feedback data I've seen customers dislike this type of licensing
 model most.  Dan may or may not be reading this, but I'd strongly discourage
 this approach.  Without knowing more I don't know what alternative I could
 recommend though.. (Too bad I missed that irc meeting..)

 ./C

 I may be wrong, but isn't this already what they do?  I mean, there is a
bunch of proprietary stuff in Solaris that didn't make it into OpenSolaris.
I thought this was how they did things anyway, or am I misunderstanding
something?


Re: [zfs-discuss] What's the advantage of using multiple filesystems in a pool

2010-03-01 Thread Thomas Burgess
On Mon, Mar 1, 2010 at 12:44 PM, Richard Elling richard.ell...@gmail.com wrote:

 On Mar 1, 2010, at 7:42 AM, Thomas Burgess wrote:

  Also consider that you might not want to snapshot the entire pool.

 Snapshots work on the dataset, not the pool (there is no zpool snapshot
 command :-)

 This is my entire point.  Somehow it must have been missed due to me not
using my words properly.

The OP asked what the advantage is of using separate filesystems instead of
just one big filesystem.

My point is you may want to snapshot SOME stuff but not other stuff.

Even if there WAS a snapshot pool function.



 What usually trips me up is the auto-snapshot service and inherited
 properties.
 You will want to make sure and not snapshot those file systems which are
 temporary in nature.  You can do this with the advanced options section
 of the Time Slider Manager GUI, or by setting the com.sun:auto-snapshot
 parameter appropriately on your datasets.
  -- richard

 yes, this is very annoying when this happens =)




 ZFS storage and performance consulting at http://www.RichardElling.com
 ZFS training on deduplication, NexentaStor, and NAS performance
 http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)







Re: [zfs-discuss] Consolidating a huge stack of DVDs using ZFS dedup: automation?

2010-03-01 Thread Thomas Burgess
On Mon, Mar 1, 2010 at 11:48 PM, valrh...@gmail.com wrote:

 One of the most useful things I've found with ZFS dedup (way to go Jeff
 Bonwick and Co.!) is the ability to consolidate backups. I had six different
 complete backups of all of my files spread out over various hard drives, and
 dedup allowed me to consolidate them into something that took less than twice
 the space of the original. I was thrilled when I saw this the first time.
 space of the original. I was thrilled when I saw this the first time.

 This led me to another idea: I have been using DVDs for small backups here
 and there for a decade now, and have a huge pile of several hundred. They
 have a lot of overlapping content, so I was thinking of feeding the entire
 stack into some sort of DVD autoloader, which would just read each disk, and
 write its contents to a ZFS filesystem with dedup enabled. Even if the
 autoloader had to run on Windows or Linux, I could just use a mounted drive
 to achieve the same ends. That would allow me to consolidate a few hundred
 CDs and DVDs onto probably a terabyte or so, which could then be kept
 conveniently on a hard drive and archived to tape. Does anyone know of a DVD
 autoloader that would allow me to do this easily, and if someone might be
 willing to rent one to me (I'm in the Boston area)? I only need to do this
 once.
 --



This would be a kick-ass project to try to make with spare parts.  I might
even try it now that you bring it up.
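Without an autoloader, the manual version is just a loop; a rough sketch, assuming discs auto-mount under /media and that tank/dvddump is a dataset with dedup=on (all paths and names hypothetical):

```shell
#!/bin/sh
# Feed discs one at a time into a dedup-enabled dataset.
i=0
while printf 'Insert disc %d and press enter (Ctrl-C to stop): ' "$i" && read _; do
  mkdir -p "/tank/dvddump/disc$i"
  cp -r /media/*/. "/tank/dvddump/disc$i/"  # dedup collapses the duplicate blocks
  eject
  i=$((i + 1))
done
```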


 This message posted from opensolaris.org



Re: [zfs-discuss] upgrading ZFS tools in opensolaris.com

2010-02-26 Thread Thomas Burgess
you can use one of the livecd's from genunix.


On Fri, Feb 26, 2010 at 3:36 AM, Laurence laure...@mangafish.net wrote:

 I'm probably getting this all wrong, but basically OpenSolaris 2009.06
 (which is the latest ISO available iirc) ships with snv 111b.
 My problem is I have a borked zpool and could really use PSARC 2009/479 to
 fix it. The problem is PSARC 2009/479 was only built recently and
 subsequently was released for solaris_nevada(snv_128).

 Is there a safe way of bringing snv_128 to OpenSolaris?

 PSARC 2009/479 details:
 http://bugs.opensolaris.org/view_bug.do?bug_id=6667683
 --



Re: [zfs-discuss] ZFS Storage system with 72 GB memory constantly has 11 GB free memory

2010-02-26 Thread Thomas Burgess
i thought it was designed to use 2/3 of the available memory


On Fri, Feb 26, 2010 at 8:46 AM, Ronny Egner ronnyeg...@gmx.de wrote:

 Dear All,

 our storage system running opensolaris b133 + ZFS has a lot of memory for
 caching. 72 GB total. While testing we observed free memory never falls
 below 11 GB.

 Even if we create a ram disk, free memory drops below 11 GB but returns to 11
 GB shortly after (i assume the ARC cache is shrunk in this context).

 As far as i know ZFS is designed to use all memory except 1 GB for
 caching



 Thanks in advance
 --



Re: [zfs-discuss] ZFS Storage system with 72 GB memory constantly has 11 GB free memory

2010-02-26 Thread Thomas Burgess
errr, i mean 3/4...i know it's some fraction anyways
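For what it's worth, the default ARC ceiling on builds of that era was, as I understand it (worth checking against arc.c on your build), the larger of 3/4 of physical memory and all-but-1-GB. So on a 72 GB box the cap itself doesn't explain 11 GB held free; a quick sanity check of the arithmetic:

```python
def default_arc_max_gb(physmem_gb: float) -> float:
    # assumed default: 3/4 of RAM or all but 1 GB, whichever is larger
    return max(physmem_gb * 3 / 4, physmem_gb - 1)

print(default_arc_max_gb(72))  # -> 71.0 GB, far above the ~61 GB in use
```

The 11 GB floor is more likely the kernel defending free memory; the live limit is visible in the `c_max` field of `kstat -n arcstats`.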


On Fri, Feb 26, 2010 at 8:49 AM, Thomas Burgess wonsl...@gmail.com wrote:

 i thought it was designed to use 2/3 of the available memory



 On Fri, Feb 26, 2010 at 8:46 AM, Ronny Egner ronnyeg...@gmx.de wrote:

 Dear All,

 our storage system running opensolaris b133 + ZFS has a lot of memory for
 caching. 72 GB total. While testing we observed free memory never falls
 below 11 GB.

 Even if we create a ram disk, free memory drops below 11 GB but returns to 11
 GB shortly after (i assume the ARC cache is shrunk in this context).

 As far as i know ZFS is designed to use all memory except 1 GB for
 caching



 Thanks in advance
 --





Re: [zfs-discuss] Who is using ZFS ACL's in production?

2010-02-26 Thread Thomas Burgess
I think most people are just confused by ACLs; i know i was when i first
started using them.  Having said that, once i got them set correctly, they
work very well for my CIFS shares.


On Fri, Feb 26, 2010 at 11:23 AM, Paul B. Henson hen...@acm.org wrote:

 On Fri, 26 Feb 2010, Darren J Moffat wrote:

  Anyone sharing files over CIFS backed by ZFS is using ACLs, particularly
  when there are only Windows clients.  There are a large number of
  deployments, some very significant in size.

 If you're running the opensolaris in-kernel CIFS server, you avoid the
 POSIX compatibility layer and zfs does actually work in a pure ACL fashion.
 OTOH, under Solaris 10, I was unable to find a samba configuration that
 didn't result in some files being hit by a chmod and losing their ACL.

  I doubt it is something people tend to talk about or publish blogs etc
  on.  That is probably the main reason you can't find them.

 It's not like I'm typing "People who use ZFS ACL's" into google and nothing
 pops up, I'm inquiring in various forums generally populated by Solaris
 using people, in which typically a "Hey, who uses foo?" post finds a fair
 number of respondents. Given the dearth of responses, I can only conclude
 their use is not very widespread. The most frequent response so far has
 been along the lines of "ACL's suck. I wish they weren't there" 8-/.

 So far it's been quite a struggle to deploy ACL's on an enterprise central
 file services platform with access via multiple protocols and have them
 actually be functional and reliable. I can see why the average consumer
 might give up.


 --
 Paul B. Henson  |  (909) 979-6361  |  
 http://www.csupomona.edu/~henson/
 Operating Systems and Network Analyst  |  hen...@csupomona.edu
 California State Polytechnic University  |  Pomona CA 91768



Re: [zfs-discuss] Import zpool from FreeBSD in OpenSolaris

2010-02-23 Thread Thomas Burgess
When i needed to do this, the only way i could get it to work was to do
this:

Take some disks, use a Opensolaris Live CD and label them EFI
Create a ZPOOL in FreeBSD with these disks
copy my data from freebsd to the new zpool
export the pool
import the pool



On Tue, Feb 23, 2010 at 9:11 PM, patrik s...@dentarg.net wrote:

 I want to import my zpool's from FreeBSD 8.0 in OpenSolaris 2009.06.

 After reading the few posts (links below) I was able to find on the
 subject, it seems like there is a difference between FreeBSD and
 Solaris. FreeBSD operates directly on the disk and Solaris creates a
 partition and uses that... is that right? Is it impossible for OpenSolaris to
 use zpool's from FreeBSD?

 * http://opensolaris.org/jive/thread.jspa?messageID=445766
 * http://opensolaris.org/jive/thread.jspa?messageID=450755;
 * http://mail.opensolaris.org/pipermail/ug-nzosug/2009-June/27.html

 This is zpool import from my machine with OpenSolaris 2009.06 (all
 zpool's are fine in FreeBSD). Notice that the zpool named temp can be
 imported. Why not secure then? Is it because it is raidz1?

  pool: secure
id: 15384175022505637073
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
 config:

 secure        UNAVAIL  insufficient replicas
  raidz1  UNAVAIL  insufficient replicas
c8t1d0p0  ONLINE
c8t2d0s2  ONLINE
c8t3d0s8  UNAVAIL  corrupted data
c8t4d0s8  UNAVAIL  corrupted data


  pool: temp
id: 10889808377251842082
  state: ONLINE
 status: The pool is formatted using an older on-disk version.
 action: The pool can be imported using its name or numeric identifier,
 though
some features will not be available without an explicit 'zpool
 upgrade'.
 config:

 temp        ONLINE
  c8t0d0p0  ONLINE
 --



Re: [zfs-discuss] File created with CIFS is not immediately deletable on local file system

2010-02-21 Thread Thomas Burgess
i may be wrong but i think it would depend on how you have your ACL's set up
and whether or not ACL inheritance is on.
On Sun, Feb 21, 2010 at 5:46 AM, Peter Radig pe...@radig.de wrote:

 Box running osol_133 with smb/server enabled. I create a file on a Windows
 box that has a remote ZFS fs mounted. I go to the Solaris box and try to
 remove the file and get permission denied for up to 30 sec. Then it works. A
 sync immediately before the rm seems to speed things up and rm is
 successful immediately afterwards, too. The permissions of the file don't
 change in between and look as follow:
 --+  1 peter  sysprog  0 Feb 21 11:37 NEWFILE3.txt
 user:peter:rwxpdDaARWcCos:---:allow
   group:2147483648:rwxpdDaARWcCos:---:allow

 Is this a bug or a feature?
 --



Re: [zfs-discuss] Idiots Guide to Running a NAS with ZFS/OpenSolaris

2010-02-18 Thread Thomas Burgess
On Thu, Feb 18, 2010 at 5:21 PM, Robert rkaye+...@spamcop.net wrote:

 At the risk of getting myself flamed with my very first post, will someone
 please point me to the 'Idiots Guide to Running a NAS with ZFS/OpenSolaris'?

  - - - sig - - -
 ...What I lack in knowledge I try to make up in witty humor.
 --

I don't think there is a guide like that but honestly, it's not that hard
for most of it.

It also really depends on what your hardware is...how you plan to set it up
and all, but once you install it, getting CIFS and NFS working is dirt simple
(though the ACL's can be somewhat more difficult; i just create a
group nas and use something like this:

/usr/bin/chmod -R A=\
owner@:full_set:d:allow,\
owner@:full_set:f:allow,\
group:nas:full_set:d:allow,\
group:nas:full_set:f:allow,\
everyone@:rxaARWcs:d:allow,\
everyone@:raARWcs:f:allow \
/tank/nas/


which seems to work well.  I also use something like this:

/usr/bin/chmod -R A=\
owner@:full_set:d:allow,\
owner@:full_set:f:allow,\
user:wonslung:full_set:d:allow,\
user:wonslung:full_set:f:allow,\
everyone@:rxaARWcs:d:allow,\
everyone@:raARWcs:f:allow \
/tank/nas/Wonslung
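Each comma-separated entry in the A= lists above is an ACE of the form who:permissions:inheritance-flags:type, where the "who" itself may be two fields (e.g. group:nas). A small sketch that splits an entry into its parts, just to make the syntax explicit (plain Python; the field names are mine):

```python
def parse_ace(ace: str) -> dict:
    """Split a ZFS ACE string: the last three fields are fixed,
    everything before them names the principal."""
    parts = ace.split(":")
    return {
        "who": ":".join(parts[:-3]),   # owner@, everyone@, group:nas, user:wonslung
        "perms": parts[-3],            # e.g. full_set or raARWcs
        "inherit": parts[-2],          # d = new directories, f = new files
        "type": parts[-1],             # allow or deny
    }

print(parse_ace("group:nas:full_set:d:allow"))
# {'who': 'group:nas', 'perms': 'full_set', 'inherit': 'd', 'type': 'allow'}
```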

but anyways, which part do you need help with?  Most of it is fairly
straightforward for a simple nas.

Installing is pretty simple.  Creating a zpool is pretty simple, enabling
cifs is pretty simple, and creating shares once you have it is pretty simple.
The ACL's were what gave me the most trouble at first which is why i made a
point to post them.  (being really new to solaris i kept trying to use the
wrong chmod and it didn't work)
but the main thing is: google is your friend.


Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Thomas Burgess
 Whatever.  Regardless of what you say, it does show:

 · Which is faster, raidz, or a stripe of mirrors?

 · How much does raidz2 hurt performance compared to raidz?

 · Which is faster, raidz, or hardware raid 5?

 · Is a mirror twice as fast as a single disk for reading?  Is a
 3-way mirror 3x faster?  And so on?



 I’ve seen and heard many people stating answers to these questions, and my
 results (not yet complete) already answer these questions, and demonstrate
 that all the previous assertions were partial truths.




I don't think he was complaining, i think he was saying he didn't need you
to run iosnoop on the old version of ZFS.

Solaris 10 has a really old version of ZFS.  i know there are some pretty
big differences in zfs versions from my own non-scientific benchmarks.  It
would make sense that people wouldn't be as interested in benchmarks of
solaris 10 ZFS seeing as there are literally hundreds scattered around the
internet.

I don't think he was telling you not to bother testing for your own purposes
though.


Re: [zfs-discuss] Painfully slow RAIDZ2 as fibre channel COMSTAR export

2010-02-14 Thread Thomas Burgess

 c7, c8 and c9 are LSI controllers using the MPT driver. The motherboard has
 6 SATA ports which are presented as two controllers (presumably c10 and
 c11)
 one for ports 0-3 and one for ports 4 and 5; both currently use the PCI-IDE
 drivers.


on my motherboard, i can make the onboard sata ports show up as IDE or SATA,
you may look into that.  It would probably be something like AHCI mode.


Re: [zfs-discuss] Painfully slow RAIDZ2 as fibre channel COMSTAR export

2010-02-14 Thread Thomas Burgess
oh, so i WAS right?


awesome

On Sun, Feb 14, 2010 at 10:45 PM, Dave Pooser dave@alfordmedia.com wrote:

  on my motherboard, i can make the onboard sata ports show up as IDE or
 SATA,
  you may look into that.  It would probably be something like AHCI mode.

 Yeah, I changed the motherboard setting from enhanced to AHCI and now
 those ports show up as SATA.
 --
 Dave Pooser, ACSA
 Manager of Information Services
 Alford Media  http://www.alfordmedia.com





Re: [zfs-discuss] available space

2010-02-13 Thread Thomas Burgess
one shows pool size, one shows filesystem size.


the pool size is based on raw space.

the zfs list size shows how much is used and how much usable space is
available.

for instance, i use raidz2 with 1tb drives so if i do zpool list i see ALL
the space, including parity, but if i do zfs list i only see how much space
the filesystem sees.

2 different tools for 2 different jobs.
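The raidz2 example works out like this: zpool list reports raw capacity across all drives, while zfs list subtracts roughly two drives' worth of parity. A back-of-the-envelope sketch (an 8-drive vdev width is assumed here, and real pools lose a bit more to metadata):

```python
def raidz_space_tb(ndisks: int, disk_tb: float, parity: int):
    """Raw vs. roughly-usable capacity for a single raidz vdev."""
    raw = ndisks * disk_tb
    usable = raw - parity * disk_tb  # parity consumes ~`parity` disks' worth
    return raw, usable

raw, usable = raidz_space_tb(8, 1.0, parity=2)  # hypothetical 8x1TB raidz2
print(raw, usable)  # zpool list sees ~8.0 TB; zfs list sees ~6.0 TB
```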


On Sat, Feb 13, 2010 at 12:28 PM, Charles Hedrick hedr...@rutgers.edu wrote:

 I have the following pool:

 NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
 OIRT  6.31T  3.72T  2.59T    58%  ONLINE  /

 zfs list shows the following for a typical file system:

 NAME                    USED  AVAIL  REFER  MOUNTPOINT
 OIRT/sakai/production  1.40T  1.77T  1.40T  /OIRT/sakai/production

 Why is available lower when shown by zfs than zpool?
 --



Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-08 Thread Thomas Burgess
Just like i said way earlier, the entire idea is like asking to buy a
Ferrari without the aluminum wheels they sell because you think they are
charging too much for them; after all, aluminum is cheap.
It's just not done that way.  There are OTHER OPTIONS for people who can't
afford it.  You really can't have both.  You can either afford it or you
can't.

On Mon, Feb 8, 2010 at 8:36 PM, Kjetil Torgrim Homme kjeti...@linpro.no wrote:

 Daniel Carosone d...@geek.com.au writes:

  In that context, I haven't seen an answer, just a conclusion:
 
   - All else is not equal, so I give my money to some other hardware
 manufacturer, and get frustrated that Sun won't let me buy the
 parts I could use effectively and comfortably.

 no one is selling disk brackets without disks.  not Dell, not EMC, not
 NetApp, not IBM, not HP, not Fujitsu, ...

 --
 Kjetil T. Homme
 Redpill Linpro AS - Changing the game




Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-08 Thread Thomas Burgess
On Mon, Feb 8, 2010 at 9:13 PM, Tim Cook t...@cook.ms wrote:

 On Monday, February 8, 2010, Kjetil Torgrim Homme kjeti...@linpro.no
 wrote:
  Daniel Carosone d...@geek.com.au writes:
 
  In that context, I haven't seen an answer, just a conclusion:
 
   - All else is not equal, so I give my money to some other hardware
 manufacturer, and get frustrated that Sun won't let me buy the
 parts I could use effectively and comfortably.
 
  no one is selling disk brackets without disks.  not Dell, not EMC, not
  NetApp, not IBM, not HP, not Fujitsu, ...
 
  --
  Kjetil T. Homme
  Redpill Linpro AS - Changing the game
 
 

 Although I am in full support of what sun is doing, to play devil's
 advocate: supermicro is.



This is a far cry from an apples to apples comparison though.


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-08 Thread Thomas Burgess
 On Mon, Feb 08, 2010 at 09:33:12PM -0500, Thomas Burgess wrote:
  This is a far cry from an apples to apples comparison though.

 As much as I'm no fan of Apple, it's a pity they dropped ZFS because
 that would have brought considerable attention to the opportunity of
 marketing and offering zfs-suitable hardware to the consumer arena.
 Port-multiplier boxes already seem to be targetted most at the Apple
 crowd, even it's only in hope of scoring a better margin.

 Otherwise, bad analogies, whether about cars or fruit, don't help.


It might help people to understand how ridiculous they sound going on and on
about buying a premium storage appliance without any storage.  I think the
car analogy was dead on.  You don't have to agree with a vendor's practices
to understand them.  If you have a more fitting analogy, then by all means
let's hear it.

