Re: [zfs-discuss] Cores vs. Speed?

2010-02-08 Thread Miles Nordin
 enh == Edward Ned Harvey sola...@nedharvey.com writes:

   enh As for mac access via nfs, automounter, etc ... I found that
   enh the UID/GID / posix permission bits were a problem, and I
   enh found it was easier and more reliable for the macs to use SMB

I found it much less reliable, if by reliable you mean not losing
data.  There's a questionable GUI feature that throws up a
[Disconnect] window whenever a normal unix system would say 'not
responding still trying', but so long as you ignore this window
instead of pressing what seems to be the only button, the old Unix
feature of ``server can reboot without losing client writes'' seems to
still be there.  SMB, not so much.

There are also questions of case sensitivity, locking, being mounted at
boot time rather than login time, and accommodating more than one user.
I've also heard SMB is far slower.

The Macs I've switched to automounted NFS are causing me less trouble.

If you are in a ``share almost everything'' situation, just add

 umask 000

to /etc/launchd.conf and reboot.
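For anyone unsure what that buys you, here's a quick sanity check of the effect (a minimal sketch; the temp-file path is just a throwaway name for the demo):

```python
import os
import stat
import tempfile

# With umask 000 no permission bits are masked off, so a file created
# with mode 0o666 keeps group/other write access -- which is what makes
# a "share almost everything" setup behave over NFS.
old_umask = os.umask(0o000)
path = os.path.join(tempfile.mkdtemp(), "umask_demo")
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
os.close(fd)
print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # 0o666 (0o644 under the usual umask 022)
os.umask(old_umask)  # restore the previous umask
```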


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cores vs. Speed?

2010-02-07 Thread Erik Trimble

Rob Logan wrote:
I like the original Phenom X3 or X4 



we all agree ram is the key to happiness. The debate is what offers the most ECC
ram for the least $. I failed to realize the AM3 cpus accepted Unbuffered ECC
DDR3-1333 like Lynnfield. To use Intel's 6 slots vs AMD's 4 slots, one must use
Registered ECC.
So the low cost mission is something like

AMD Phenom II X4 955 Black Edition Deneb 3.2GHz Socket AM3 125W 
$150 http://www.newegg.com/Product/Product.aspx?Item=N82E16819103808  
$ 85 http://www.newegg.com/Product/Product.aspx?Item=N82E16813131609  
$ 60 http://www.newegg.com/Product/Product.aspx?Item=N82E16820139050


But we are still stuck at 8G without going to expensive ram or
a more expensive CPU.
Socket AM2/AM2+ supports a maximum of 4 DIMM sockets (2 dual banks).  I 
/think/ AM3 has the same limitation, but I can't verify that.


As for Intel, I can only find a single 6-DIMM motherboard for LGA1156 
(the i3/i5/i7 socket, not the LGA1366 i7-only socket).  It's $250  
(http://www.newegg.com/Product/Product.aspx?Item=N82E16813128410)



Frankly, for more than 8GB, your best bet is to pick up an EOL'd
motherboard - a dual Socket F board which supports an AMD Barcelona-era
Opteron is probably the best buy - $250 for both, give or take.  And
Registered ECC DDR2-667 runs less than $50 per 2GB stick.




In reality, what I've found is often the best bet is to get an older IBM 
system from eBay.  There are plenty of them around, they're pretty 
cheap, and parts are relatively inexpensive.   For instance, I just got 
a (new, still under warranty) IBM x3500 for about $500 - it's a tower 
case.  The 2U rackmount IBM x3655 is also a good fit here.  The sole 
drawback of these things is that they aren't exactly built to be 
super-low power. Oh well.  But they've got all sorts of nice bells and 
whistles.  :-)


(good news for me:  I got a fully-tricked-out x3500 with 2 dual-core Xeon 5140s,
8GB of RAM, 4x73GB SAS and 4x750GB SATA drives for under $1k. AND a
battery-backed raid controller. AND a real Service Processor. AND that
includes the 3-year IBM warranty.  And it runs OpenSolaris 2009.06 with
no tricks required - simply boot, install, and it's all set.)



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] Cores vs. Speed?

2010-02-07 Thread David Magda

On Feb 6, 2010, at 21:08, Rob Logan wrote:

I failed to realize the AM3 cpus accepted UnBuffered ECC DDR3-1333  
like Lynnfield. To use Intel's 6 slots vs AMD 4 slots, one must use  
Registered ECC.


What is the difference between unbuffered and registered?



Re: [zfs-discuss] Cores vs. Speed?

2010-02-07 Thread Erik Trimble

Ian Collins wrote:

David Magda wrote:

On Feb 6, 2010, at 21:08, Rob Logan wrote:

I failed to realize the AM3 cpus accepted UnBuffered ECC DDR3-1333 
like Lynnfield. To use Intel's 6 slots vs AMD 4 slots, one must use 
Registered ECC.


What is the difference between unbuffered and registered?


About $5 ?

baddaboom

Thank you, thank you, I'll be here all week.


Buffered is often used as a synonym for Registered memory.
The addition of a register (buffer) between the memory controller and 
the memory chips reduces the bus load.  This increases the number of 
modules that can be driven.  This is why many systems specify 
unbuffered and (significantly higher) registered memory capacities.

Be careful.

Fully-Buffered and Registered are NOT the same.


http://en.wikipedia.org/wiki/Registered_memory


It's (really) hard to design a system to use more than 4 DIMM slots with 
Unbuffered RAM and still keep everything stable.


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] Cores vs. Speed?

2010-02-06 Thread Robert Milkowski

On 06/02/2010 02:38, Ross Walker wrote:

On Feb 5, 2010, at 10:49 AM, Robert Milkowski mi...@task.gda.pl wrote:


Actually, there is.
One difference is that when writing to a raid-z{1|2} pool, compared to a
raid-10 pool, you should get better throughput if at least 4 drives are
used. In RAID-10 the maximum write throughput is the aggregate
throughput of half the disks (assuming there are no other bottlenecks
between the OS and the disks), because mirroring doubles the bandwidth
requirements. In RAID-Zn you have some extra overhead for writing the
parity, but other than that you should get a write throughput closer to
T-N (where N is the RAID-Z level) instead of T/2 in RAID-10.


That hasn't been my experience with raidz. I get a max read and write 
IOPS of the slowest drive in the vdev.


Which makes sense because each write spans all drives and each read 
spans all drives (except the parity drives) so they end up having the 
performance characteristics of a single drive.





Please note that I was writing about write *throughput* in terms of MB/s,
not IOPS.
But even in terms of write IOPS, RAID-Z can be faster than RAID-10,
assuming asynchronous I/O is issued and there is enough memory to buffer
it for up to 30s. If that is the case then, from the application's point
of view, writing is as fast as writing to memory with both raid-z and
raid-10. But because zfs aggregates writes and turns them into basically
sequential writes, raid-z can provide more throughput than raid-10,
which needs to write the data twice, so raid-z may be able to flush
transactions to disks more quickly.
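The T/2-versus-(T-N) arithmetic can be sketched like this (a back-of-the-envelope model that assumes identical disks and no other bottlenecks; the per-disk figures are illustrative, not measured):

```python
# Idealized sequential write throughput; T = aggregate disk bandwidth.
def raid10_write(n_disks, disk_mb_s):
    # Every block is written twice (to both sides of a mirror),
    # so only half the aggregate bandwidth carries unique data: T/2.
    return n_disks * disk_mb_s / 2

def raidz_write(n_disks, parity_level, disk_mb_s):
    # A full-stripe write spends `parity_level` disks on parity and
    # the rest on data: roughly T - N in the notation above.
    return (n_disks - parity_level) * disk_mb_s

print(raid10_write(6, 100))    # 300.0 MB/s of unique data
print(raidz_write(6, 1, 100))  # 500 MB/s (raidz1)
print(raidz_write(6, 2, 100))  # 400 MB/s (raidz2)
```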


See 
http://milek.blogspot.com/2006/04/software-raid-5-faster-tha_114588672235104990.html



--
Robert Milkowski
http://milek.blogspot.com



Re: [zfs-discuss] Cores vs. Speed?

2010-02-06 Thread Edward Ned Harvey
  b (4) Hold backups from windows machines, mac (time machine),
  b linux.
 
 for time machine you will probably find yourself using COMSTAR and the
 GlobalSAN iSCSI initiator because Time Machine does not seem willing
 to work over NFS.  Otherwise, for Macs you should definitely use NFS,
 and you should definitely use the automounter, and you should use it
 with the 'net' option (let Mac OS pick where you mount the fs) if you
 have hierarchical mounts.

A few comments here ... True, Time Machine won't work across NFS, at least
not unless you enable the unsupported bit.  But it depends on the server OS:
FreeNAS, for example, uses ZFS as the underlying filesystem and easily
allows you to create AFP shares, which support Time Machine.

As for mac access via nfs, automounter, etc ... I found that the UID/GID /
posix permission bits were a problem, and I found it was easier and more
reliable for the macs to use SMB instead of NFS.  At least in my
environment.



Re: [zfs-discuss] Cores vs. Speed?

2010-02-06 Thread Bob Friesenhahn

On Fri, 5 Feb 2010, Rob Logan wrote:


Intel's RAM is faster because it needs to be.

I'm confused how AMD's dual channel, two way interleaved
128-bit DDR2-667 into an on-cpu controller is faster than
Intel's Lynnfield dual channel, Rank and Channel interleaved
DDR3-1333 into an on-cpu controller.
http://www.anandtech.com/printarticle.aspx?i=3634


I see that you are reading a game computing web site.  It is for 
people who want to build PCs to run video games under Windows.  The 
most useful thing I see in the referenced article is that these new 
Intel Core i7 CPUs are able to idle at much lower power levels, which 
seems quite useful for a home NAS server.  Otherwise I don't see much 
which indicates what the performance would be with Solaris/zfs in a
storage setup.


The main focus should be on how much ECC RAM you can stuff into the 
motherboard and how much it costs.  After that comes multi-threaded 
memory I/O performance and power consumption.  Raw CPU computational 
performance should be way down in the priority level.  Even a fairly 
slow CPU should be able to saturate gigabit ethernet.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


Re: [zfs-discuss] Cores vs. Speed?

2010-02-06 Thread Erik Trimble

Bob Friesenhahn wrote:

On Fri, 5 Feb 2010, Rob Logan wrote:


Intel's RAM is faster because it needs to be.

I'm confused how AMD's dual channel, two way interleaved
128-bit DDR2-667 into an on-cpu controller is faster than
Intel's Lynnfield dual channel, Rank and Channel interleaved
DDR3-1333 into an on-cpu controller.
http://www.anandtech.com/printarticle.aspx?i=3634


I see that you are reading a game computing web site.  It is for 
people who want to build PCs to run video games under Windows.  The 
most useful thing I see in the referenced article is that these new 
Intel Core i7 CPUs are able to idle at much lower power levels, which 
seems quite useful for a home NAS server.  Otherwise I don't see much 
which indicates what the performance would be with Solaris/zfs in a 
storage-setup.


The main focus should be on how much ECC RAM you can stuff into the 
motherboard and how much it costs.  After that comes multi-threaded 
memory I/O performance and power consumption.  Raw CPU computational 
performance should be way down in the priority level.  Even a fairly 
slow CPU should be able to saturate gigabit ethernet.


Bob
I would second Bob's recommendations.  For a storage box, the primary
thing of importance is having enough ECC RAM to cache everything. A big
L2ARC SSD seems to be equally important for those using dedup regularly.


Also, be /very/ careful with buying non-Xeon Intel CPUs. With anything 
prior to the Nehalem architecture, the memory controller was on the 
motherboard, and you specifically have to get a motherboard which 
supports ECC RAM.  For the Nehalem and later architectures (Core i3, i5, 
i7), with the memory controller on the CPU, only SOME of them support 
ECC RAM.   I /strongly/ suggest looking at the CPU specs from Intel 
first, when getting any non-Xeon CPU or motherboard:


http://ark.intel.com/Default.aspx

AMD, of course, does not have this problem. ALL x64 AMD CPUs sold these 
days support ECC.  And, it seems that finding an AMD motherboard which has


Frankly, I suspect that a small storage box pumping data out a single 
1Gbit ethernet interfaces really doesn't stress a CPU that much, in the 
big scheme of things.  I like the original Phenom X3 or X4 as a good 
compromise between modest L2 cache, modest power draw, good multi-core, 
and really cheap price.




If you really want something hard-core, I'd step over into the older AMD 
Barcelona-based Opterons. They're equivalent to the Phenom, plus their 
motherboards come with just stupid numbers of DIMM slots.


:-)

--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] Cores vs. Speed?

2010-02-06 Thread Rob Logan
 I like the original Phenom X3 or X4 

we all agree ram is the key to happiness. The debate is what offers the most ECC
ram for the least $. I failed to realize the AM3 cpus accepted Unbuffered ECC
DDR3-1333 like Lynnfield. To use Intel's 6 slots vs AMD's 4 slots, one must use
Registered ECC.
So the low cost mission is something like

AMD Phenom II X4 955 Black Edition Deneb 3.2GHz Socket AM3 125W 
$150 http://www.newegg.com/Product/Product.aspx?Item=N82E16819103808  
$ 85 http://www.newegg.com/Product/Product.aspx?Item=N82E16813131609  
$ 60 http://www.newegg.com/Product/Product.aspx?Item=N82E16820139050

But we are still stuck at 8G without going to expensive ram or
a more expensive CPU.

Rob


Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Marty Scholes
 Was my raidz2 performance comment above correct?  That the write speed
 is that of the slowest disk?  That is what I believe I have read.

 You are sort-of-correct that it's the write speed of the slowest disk.

My experience is not in line with that statement.  RAIDZ will write a complete 
stripe plus parity (RAIDZ2 - two parities, etc.).  The write speed of the 
entire stripe will be brought down to that of the slowest disk, but only for 
its portion of the stripe.  In the case of a 5 spindle RAIDZ2, 1/3 of the 
stripe will be written to each of three disks and parity info on the other two 
disks.  The throughput would be 3x the slowest disk for read or write.
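The 5-spindle example works out like this (a sketch of the model described above, with made-up per-disk numbers):

```python
# Rough model from the paragraph above: a RAIDZ[123] vdev streams at
# roughly (data disks) x (slowest disk), while small random ops behave
# like a single disk because a full stripe touches every spindle.
def raidz_vdev(n_disks, parity, slowest_disk_mb_s, disk_iops):
    data_disks = n_disks - parity
    return {"stripe_mb_s": data_disks * slowest_disk_mb_s,
            "iops": disk_iops}

# 5-spindle RAIDZ2: 3 data disks -> ~3x the slowest disk's throughput.
print(raidz_vdev(5, 2, 80, 120))  # {'stripe_mb_s': 240, 'iops': 120}
```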

 Mirrored drives will be faster, especially for
 random I/O. But you sacrifice storage for that
 performance boost.

Is that really true?  Even after glancing at the code, I don't know if zfs 
overlaps mirror reads across devices.  Watching my rpool mirror leads me to 
believe that it does not.  If true, then mirror reads would be no faster than a 
single disk.  Mirror writes are no faster than the slowest disk.

As a somewhat related rant, there seems to be confusion about mirror IOPS vs. 
RAIDZ[123] IOPS.  Assuming mirror reads are not overlapped, then a mirror vdev 
will read and write at roughly the same throughput and IOPS as a single disk 
(ignoring bus and cpu constraints).

Also ignoring bus and cpu constraints, a RAIDZ[123] vdev will read and write at 
roughly the same throughput of a single disk, multiplied by the number of data 
drives: three in the config being discussed.  Also, a RAIDZ[123] vdev will have 
IOPS performance similar to that of a single disk.

A stack of mirror vdevs will, of course, perform much better than a single 
mirror vdev in terms of throughput and IOPS.

A stack of RAIDZ[123] vdevs will also perform much better than a single 
RAIDZ[123] vdev in terms of throughput and IOPS.

RAIDZ tends to have more CPU overhead and provides more flexibility in choosing 
the optimal data to redundancy ratio.

Many read IOPS problems can be mitigated by L2ARC, even a set of small, fast 
disk drives.  Many write IOPS problems can be mitigated by ZIL.

My anecdotal conclusions backed by zero science,
Marty
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Robert Milkowski

On 05/02/2010 04:11, Edward Ned Harvey wrote:

Data in raidz2 is striped so that it is split across multiple disks.
 

Partial truth.
Yes, the data is on more than one disk, but it's a parity hash, requiring
computation overhead and a write operation on each and every disk.  It's not
simply striped.  Whenever you read or write, you need to access all the
disks (or a bunch of 'em) and use compute cycles to generate the actual data
stream.  I don't know enough about the underlying methods of calculating and
distributing everything to say intelligently *why*, but I know this:

   


Well, that's not entirely true. When reading from raidz2 (non-degraded) 
you don't need to re-compute any parity, except for a standard fs block 
checksum, which zfs checks regardless of underlying redundancy.




In this (sequential) sense it is faster than a single disk.
 

Whenever I benchmark raid5 versus a mirror, the mirror is always faster.
Noticeably and measurably faster, as in 50% to 4x faster.  (50% for a single
disk mirror versus a 6-disk raid5, and 4x faster for a stripe of mirrors, 6
disks with the capacity of 3, versus a 6-disk raid5.)  Granted, I'm talking
about raid5 and not raidz.  There is possibly a difference there, but I
don't think so.

   

Actually, there is.
One difference is that when writing to a raid-z{1|2} pool, compared to a
raid-10 pool, you should get better throughput if at least 4 drives are
used. In RAID-10 the maximum write throughput is the aggregate
throughput of half the disks (assuming there are no other bottlenecks
between the OS and the disks), because mirroring doubles the bandwidth
requirements. In RAID-Zn you have some extra overhead for writing the
parity, but other than that you should get a write throughput closer to
T-N (where N is the RAID-Z level) instead of T/2 in RAID-10.


See 
http://milek.blogspot.com/2006/04/software-raid-5-faster-tha_114588672235104990.html



--
Robert Milkowski
http://milek.blogspot.com



Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Bob Friesenhahn

On Fri, 5 Feb 2010, Rob Logan wrote:


well, let's look at Intel's offerings... RAM is faster than AMD's
at 1333MHz DDR3, and one gets ECC and a thermal sensor for $10 over non-ECC


Intel's RAM is faster because it needs to be.  It is wise to see the 
role that architecture plays in total performance.



Now, this gets one to 8G ECC easily...AMD's unfair advantage is all those
ram slots on their multi-die MBs... A slow AMD cpu with 64G ram
might be better depending on your working set / dedup requirements.


With the AMD CPU, the memory will run cooler and be cheaper. 
Regardless, for zfs, memory is more important than raw CPU 
performance.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Rob Logan


 if zfs overlaps mirror reads across devices.

it does... I have one very old disk in this mirror, and
when I attach another element one can see more reads going
to the faster disks... this paste isn't from right after the attach
but since the reboot, but one can still see that the reads are
load-balanced depending on the response of the elements
in the vdev.

13 % zpool iostat -v
                capacity     operations    bandwidth
pool          used  avail   read  write   read  write
----------   -----  -----  -----  -----  -----  -----
rpool        7.01G   142G      0      0  1.60K  1.44K
  mirror     7.01G   142G      0      0  1.60K  1.44K
    c9t1d0s0     -      -      0      0    674  1.46K
    c9t2d0s0     -      -      0      0    687  1.46K
    c9t3d0s0     -      -      0      0    720  1.46K
    c9t4d0s0     -      -      0      0    750  1.46K


but I also support your conclusions.

Rob



Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Miles Nordin
 b == Brian  broco...@vt.edu writes:

 b (4) Hold backups from windows machines, mac (time machine),
 b linux.

for time machine you will probably find yourself using COMSTAR and the
GlobalSAN iSCSI initiator because Time Machine does not seem willing
to work over NFS.  Otherwise, for Macs you should definitely use NFS,
and you should definitely use the automounter, and you should use it
with the 'net' option (let Mac OS pick where you mount the fs) if you
have hierarchical mounts.

Anyway for time machine you cannot use NFS.  I'm using:

 * snv_130
 * globalSAN_4.0.0.197_BETA-20091110
 * Mac OS X 10.5.latest

and it has basically worked for the last ~1 month.  I've no reason
to believe these versions are special, but I suggest you get the BETA
globalSAN and not the stable one.

for linux, if you mount Linux NFS filesystems from Solaris you need to
use '-o sec=sys' to avoid everything showing up as guest, due to a
weird corner case that I think eventually got fixed on one side or the
other but probably hasn't percolated through all the stable branches
yet.

If you mount Solaris NFS filesystems from Linux, you may want to use
'-o noacl' because Solaris NFS fabricates ACLs and feeds them to
Linux even when you haven't made any, leading to annoying '+' signs in
'ls -l' and sometimes weird, unnecessary permissions problems.  This
happens even with NFSv3. :( What's even stupider, busybox 'mount'
doesn't seem to support the noacl flag which cost me an extra couple
hours getting an NFS-rooted system to boot.  I like the idea of
smoothly transitioning to a more advanced permissions system, but IMHO
the whole mess just goes to show you, let people who've been mucking
about with Windows touch anything else in your codebase, and their
brains are so warped by the influence of that platform on their
thinking they make a ponderous mess of it and then chant ``this
shouldn't be happening'' over and over.

 b (5) Be an iSCSI target for several different Virtual Boxes.

I've been using plain statically-allocated (not dynamic) .VDIs on ZFS
filesystems.  I've not been using zvols or any iSCSI yet.  If you do
the latter two, I suggest comparing performance with the former;
there are rumors that some cache flush knobs may need tuning.

Also, in general, when you yank the cord the integrity of a physical
machine's filesystems is guaranteed, but the same is *not* true of a
virtual machine when its host's cord is yanked.  It's supposed to be
true when you force-virtual-powerdown the guest, but not when you yank
the host's cord, because the same knobs were twisted to compromise
integrity for performance.  The compromise is probably the right one
provided you can work around it, for example by snapshotting the guest
so you can roll back if there's corruption, and keeping oft-changing
files that can't be rolled back outside the guest, using either the
guest services' shared folders on Windows or NFS on Unix.

 b Function 4 will use compression and deduplication.  Function 5
 b will use deduplication.

I've not dared to use dedup yet.  In particular, the DDT needs to fit
in RAM (or maybe L2ARC) to avoid performance degradations so severe
you may find yourself painted into a corner (ex., 'zfs destroy' runs
for a week, forcing you to give up, 'zfs send' non-deduped filesystems
elsewhere, destroy the pool, and restore from backup).  Not sure a
dedicated DDT vdev is the best idea, but that's discussed here:

 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6913566

What's missing, to my view, is a way to manage it: if an overgrown DDT
can, in effect, trash the pool by making maintenance commands take
forever, then there's got to be a way to watch the size of the DDT,
maybe even cap it and disable dedup if it overgrows.  That said, I
haven't tried it, so I'm talking out my ass.
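To get a feel for the scale of the problem, here is a hypothetical sizing sketch; the ~320 bytes per DDT entry and 128K average block size are assumptions I'm plugging in, not figures from this thread:

```python
# Ballpark RAM needed to hold the dedup table (DDT) entirely in core.
def ddt_ram_gb(pool_bytes, avg_block=128 * 1024, bytes_per_entry=320):
    entries = pool_bytes / avg_block          # one DDT entry per unique block
    return entries * bytes_per_entry / 2**30  # bytes -> GiB

# A 10 TiB pool of 128K blocks needs on the order of 25 GiB just for
# the DDT -- which is why an undersized box gets painted into a corner.
print(round(ddt_ram_gb(10 * 2**40), 1))  # 25.0
```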

Also gzip compression does not sound like it works well---suggest lzjb
instead---but this might be fixed in 6586537, 6806882, or by this fix
which sounds like a fairly big deal:

 http://arc.opensolaris.org/caselog/PSARC/2009/615/mail

so I would say gzip may be worth another try now but definitely be
ready to fall back to lzjb and convert with zfs send | zfs recv.

anyway...seems many things are really improving drastically since a
year ago, and thank god for the list!




Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Brandon High
On Fri, Feb 5, 2010 at 12:20 PM, Miles Nordin car...@ivy.net wrote:
 for time machine you will probably find yourself using COMSTAR and the
 GlobalSAN iSCSI initiator because Time Machine does not seem willing
 to work over NFS.  Otherwise, for Macs you should definitely use NFS,

Slightly off-topic ...

You can make Time Machine work with CIFS or NFS mounts by setting a
system preference.

The command is:
defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

I've had some success trying to get my father-in-law's system to back
up to a drobo with this. It was working last time I was by his house,
but I'm not sure if it's still working.

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Ross Walker

On Feb 5, 2010, at 10:49 AM, Robert Milkowski mi...@task.gda.pl wrote:


Actually, there is.
One difference is that when writing to a raid-z{1|2} pool, compared to a
raid-10 pool, you should get better throughput if at least 4 drives are
used. In RAID-10 the maximum write throughput is the aggregate
throughput of half the disks (assuming there are no other bottlenecks
between the OS and the disks), because mirroring doubles the bandwidth
requirements. In RAID-Zn you have some extra overhead for writing the
parity, but other than that you should get a write throughput closer to
T-N (where N is the RAID-Z level) instead of T/2 in RAID-10.


That hasn't been my experience with raidz. I get a max read and write  
IOPS of the slowest drive in the vdev.


Which makes sense because each write spans all drives and each read  
spans all drives (except the parity drives) so they end up having the  
performance characteristics of a single drive.


Now, if you have enough drives, you can create multiple raidz vdevs to  
get the IOPS up, but for the same number of spindles, multiple mirror  
vdevs will always provide more IOPS.
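The spindle argument in numbers (a sketch assuming IOPS scale with vdev count and each vdev delivers roughly one disk's worth, per the discussion above; the per-disk IOPS figure is illustrative):

```python
# Random IOPS scale with the number of vdevs; each vdev behaves
# roughly like a single disk for small random I/O.
def pool_iops(n_disks, disks_per_vdev, disk_iops=120):
    return (n_disks // disks_per_vdev) * disk_iops

print(pool_iops(12, 2))  # 720 -- twelve disks as 6 mirror pairs
print(pool_iops(12, 6))  # 240 -- the same disks as 2 raidz2 vdevs
```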


-Ross



Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Rob Logan

 Intel's RAM is faster because it needs to be.
I'm confused how AMD's dual channel, two way interleaved 
128-bit DDR2-667 into an on-cpu controller is faster than
Intel's Lynnfield dual channel, Rank and Channel interleaved 
DDR3-1333 into an on-cpu controller. 
http://www.anandtech.com/printarticle.aspx?i=3634

 With the AMD CPU, the memory will run cooler and be cheaper. 
cooler yes, but only $2 more per gig for 2x the bandwidth?

http://www.newegg.com/Product/Product.aspx?Item=N82E16820139050
http://www.newegg.com/Product/Product.aspx?Item=N82E16820134652

and if one uses all 16 slots, that 667MHz DIMM runs at 533MHz
with AMD. The same is true for Lynnfield: if one uses Registered
DDR3, one only gets 800MHz with all 6 slots (single or dual rank).

 Regardless, for zfs, memory is more important than raw CPU 
agreed! but everything must be balanced.

Rob


[zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Brian
I am starting to put together a home NAS server that will have the following 
roles:

(1) Store TV recordings from SageTV over either iSCSI or CIFS.  Up to 4 or 5 HD 
streams at a time.  These will be streamed live to the NAS box during recording.
(2) Playback TV (could be stream being recorded, could be others) to 3 or more 
extenders
(3) Hold a music repository
(4) Hold backups from windows machines, mac (time machine), linux.
(5) Be an iSCSI target for several different Virtual Boxes.

Function 4 will use compression and deduplication.
Function 5 will use deduplication.

I plan to start with 5 1.5 TB drives in a raidz2 configuration and 2 mirrored 
boot drives.  

I have been reading these forums off and on for about 6 months trying to figure 
out how to best piece together this system.

I am first trying to select the CPU.  I am leaning towards AMD because of ECC 
support and power consumption.

For items such as de-duplication, compression, checksums, etc., is it better to 
get a faster clock speed, or should I consider more cores?  I know certain 
functions such as compression may run on multiple cores.

I have so far narrowed it down to:

AMD Phenom II X2 550 Black Edition Callisto 3.1GHz
and
AMD Phenom X4 9150e Agena 1.8GHz Socket AM2+ 65W Quad-Core

As they are roughly the same price.


Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Glenn Lagasse
* Brian (broco...@vt.edu) wrote:
 I am Starting to put together a home NAS server that will have the
 following roles:
 
 (1) Store TV recordings from SageTV over either iSCSI or CIFS.  Up to
 4 or 5 HD streams at a time.  These will be streamed live to the NAS
 box during recording.  (2) Playback TV (could be stream being
 recorded, could be others) to 3 or more extenders (3) Hold a music
 repository (4) Hold backups from windows machines, mac (time machine),
 linux.  (5) Be an iSCSI target for several different Virtual Boxes.
 
 Function 4 will use compression and deduplication.  Function 5 will
 use deduplication.
 
 I plan to start with 5 1.5 TB drives in a raidz2 configuration and 2
 mirrored boot drives.  
 
 I have been reading these forums off and on for about 6 months trying
 to figure out how to best piece together this system.
 
 I am first trying to select the CPU.  I am leaning towards AMD because
 of ECC support and power consumption.

I can't comment on most of your question, but I will point you at:

http://blogs.sun.com/mhaywood/entry/powernow_for_solaris

I *think* the CPUs you're looking at won't be an issue, but it's just something
to be aware of when looking at AMD kit (especially if you want to manage
the processor speed).

Cheers,

-- 
Glenn


Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Marc Nicholas
I would go with cores (threads) rather than clock speed here. My home system
is a 4-core AMD @ 1.8Ghz and performs well.

I wouldn't use drives that big, and you should be aware of the overheads of
RAIDZ[x].

-marc



On Thu, Feb 4, 2010 at 6:19 PM, Brian broco...@vt.edu wrote:

 I am Starting to put together a home NAS server that will have the
 following roles:

 (1) Store TV recordings from SageTV over either iSCSI or CIFS.  Up to 4 or
 5 HD streams at a time.  These will be streamed live to the NAS box during
 recording.
 (2) Playback TV (could be stream being recorded, could be others) to 3 or
 more extenders
 (3) Hold a music repository
 (4) Hold backups from windows machines, mac (time machine), linux.
 (5) Be an iSCSI target for several different Virtual Boxes.

 Function 4 will use compression and deduplication.
 Function 5 will use deduplication.

 I plan to start with 5 1.5 TB drives in a raidz2 configuration and 2
 mirrored boot drives.

 I have been reading these forums off and on for about 6 months trying to
 figure out how to best piece together this system.

 I am first trying to select the CPU.  I am leaning towards AMD because of
 ECC support and power consumption.

 For items such as de-duplication, compression, checksums etc.  Is it
 better to get a faster clock speed or should I consider more cores?  I know
 certain functions such as compression may run on multiple cores.

 I have so far narrowed it down to:

 AMD Phenom II X2 550 Black Edition Callisto 3.1GHz
 and
 AMD Phenom X4 9150e Agena 1.8GHz Socket AM2+ 65W Quad-Core

 As they are roughly the same price.
 --
 This message posted from opensolaris.org


Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Brian
Thanks for the reply.

Are cores better because the compression/deduplication is multi-threaded, or 
because of multiple streams?  It is a pretty big difference in clock speed - 
so I'm curious as to why cores would be better.  Glad to see your 4-core system 
is working well for you - it seems I won't really have a bad choice either way.

Why avoid large drives?  Reliability reasons?  My main thought is that there 
is a 3-year warranty and I am building raidz2 because I expect failure.  
Or are there other reasons to avoid large drives?

I thought I understood the overhead: the write and read speeds should be 
roughly that of the slowest disk? 

Thanks.


Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Richard Elling
Put your money into RAM, especially for dedup.
 -- richard

On Feb 4, 2010, at 3:19 PM, Brian wrote:

 I am Starting to put together a home NAS server that will have the following 
 roles:
 
 (1) Store TV recordings from SageTV over either iSCSI or CIFS.  Up to 4 or 5 
 HD streams at a time.  These will be streamed live to the NAS box during 
 recording.
 (2) Playback TV (could be stream being recorded, could be others) to 3 or 
 more extenders
 (3) Hold a music repository
 (4) Hold backups from windows machines, mac (time machine), linux.
 (5) Be an iSCSI target for several different Virtual Boxes.
 
 Function 4 will use compression and deduplication.
 Function 5 will use deduplication.
 
 I plan to start with 5 1.5 TB drives in a raidz2 configuration and 2 mirrored 
 boot drives.  
 
 I have been reading these forums off and on for about 6 months trying to 
 figure out how to best piece together this system.
 
 I am first trying to select the CPU.  I am leaning towards AMD because of ECC 
 support and power consumption.
 
 For items such as de-duplication, compression, checksums etc.  Is it better 
 to get a faster clock speed or should I consider more cores?  I know certain 
 functions such as compression may run on multiple cores.
 
 I have so far narrowed it down to:
 
 AMD Phenom II X2 550 Black Edition Callisto 3.1GHz
 and
 AMD Phenom X4 9150e Agena 1.8GHz Socket AM2+ 65W Quad-Core
 
 As they are roughly the same price.


Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Arnaud Brand




On 05/02/10 01:00, Brian wrote:

  Thanks for the reply.

Are cores better because the compression/deduplication is multi-threaded or because of multiple streams?  It is a pretty big difference in clock speed - so curious as to why cores would be better.  Glad to see your 4-core system is working well for you - so seems like I won't really have a bad choice.

Why avoid large drives?  Reliability reasons?  My main thought on that is that there is a 3 year warranty and I am building raidz2 because I expect failure.  Or are there other reasons to avoid large drives?

I thought I understood the overhead..  The write and read speeds should be roughly that of the slowest disk? 

Thanks.
  

From what I've seen, ZFS scales remarkably well with multiple cores.
If you want to send/receive your filesystems through ssh to another
machine, clock speed matters, since ssh only uses one core (but then you
can always use netcat).
On a Xeon E5520 running at 2.27 GHz we achieve around 70-80 MB/s of ssh
throughput.

For dedup, you want lots of RAM and, if possible, a large and fast SSD
for L2ARC.
Someone on this list was asking for estimates of RAM/cache needs based
on block size, filesystem size, and estimated dedup ratio.
Either I missed the answer or there was no really simple answer (other
than more is better, which always stays true for RAM and L2ARC).
Anyway, we tested it and were surprised by the quantity of reads that
ensues.
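To put rough numbers on the "more is better" answer: a commonly cited ballpark is a few hundred bytes of in-core dedup-table (DDT) entry per unique block. The figures below (320 bytes per entry, 128K average block size) are illustrative assumptions, not measured values:

```python
# Rough DDT memory estimate -- a sketch, not an official formula.
# The 320 bytes/entry and 128K average block size are assumptions.
def ddt_ram_bytes(pool_bytes, avg_block_bytes=128 * 1024,
                  bytes_per_entry=320):
    """Estimate RAM/L2ARC needed to hold the dedup table in core."""
    entries = pool_bytes // avg_block_bytes   # one DDT entry per unique block
    return entries * bytes_per_entry

# Example: 4.5 TiB of unique deduped data at the default 128K recordsize
tib = 1024 ** 4
print(ddt_ram_bytes(int(4.5 * tib)) / 1024 ** 3, "GiB")  # -> 11.25 GiB
```

Smaller blocks (e.g. 8K zvols for iSCSI) multiply the entry count, which is why dedup on VM storage is so much hungrier for RAM than dedup on large media files.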

Arnaud





Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Cindy Swearingen

Hi Brian,

If you are considering testing dedup, particularly on large datasets, 
see the list of known issues, here:


http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup

Start with build 132.

Thanks,

Cindy


On 02/04/10 16:19, Brian wrote:

I am Starting to put together a home NAS server that will have the following 
roles:

(1) Store TV recordings from SageTV over either iSCSI or CIFS.  Up to 4 or 5 HD 
streams at a time.  These will be streamed live to the NAS box during recording.
(2) Playback TV (could be stream being recorded, could be others) to 3 or more 
extenders
(3) Hold a music repository
(4) Hold backups from windows machines, mac (time machine), linux.
(5) Be an iSCSI target for several different Virtual Boxes.

Function 4 will use compression and deduplication.
Function 5 will use deduplication.

I plan to start with 5 1.5 TB drives in a raidz2 configuration and 2 mirrored boot drives.  


I have been reading these forums off and on for about 6 months trying to figure 
out how to best piece together this system.

I am first trying to select the CPU.  I am leaning towards AMD because of ECC 
support and power consumption.

For items such as de-duplication, compression, checksums etc.  Is it better to 
get a faster clock speed or should I consider more cores?  I know certain 
functions such as compression may run on multiple cores.

I have so far narrowed it down to:

AMD Phenom II X2 550 Black Edition Callisto 3.1GHz
and
AMD Phenom X4 9150e Agena 1.8GHz Socket AM2+ 65W Quad-Core

As they are roughly the same price.



Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Brian
It sounds like the consensus is more cores over clock speed.  Surprising to me, 
since the difference in clock speed was over 1GHz.  So I will go with a quad 
core.

I was leaning towards 4GB of RAM - which hopefully should be enough for dedup, 
as I am only planning on dedupping my smaller file systems (backups and VMs).

Was my raidz2 performance comment above correct?  That the write speed is that 
of the slowest disk?  That is what I believe I have read.

Now on to the hard part of picking a motherboard that is supported and has 
enough SATA ports!


Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Marc Nicholas
On Thu, Feb 4, 2010 at 7:54 PM, Brian broco...@vt.edu wrote:

 It sounds like the consensus is more cores over clock speed.  Surprising to
 me since the difference in clocks speed was over 1Ghz.  So, I will go with a
 quad core.


Four cores @ 1.8GHz = 7.2GHz of threaded performance ([Open]Solaris is
relatively decent in terms of threading).

Two cores @ 3.1GHz = 6.2GHz

:)

You may find single-threaded operations slower, as someone pointed out, but
even those might wash out, as sometimes it's I/O that's the problem.
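The cores-times-clock sums above are an upper bound; what you actually get depends on how parallel the workload is. A hedged Amdahl's-law sketch comparing the two candidate chips (the parallel fractions are illustrative assumptions, not measurements):

```python
# Amdahl's-law sketch: which CPU wins depends on how parallel the
# workload is. Clock/core numbers are the two chips from this thread.
def effective_ghz(clock_ghz, cores, parallel_fraction):
    """Throughput in 'single-core GHz equivalents' under Amdahl's law."""
    serial = 1 - parallel_fraction
    speedup = 1 / (serial + parallel_fraction / cores)
    return clock_ghz * speedup

for p in (0.5, 0.9, 0.99):
    quad = effective_ghz(1.8, 4, p)   # Phenom X4 9150e
    dual = effective_ghz(3.1, 2, p)   # Phenom II X2 550
    print(f"parallel={p:.2f}  X4@1.8GHz: {quad:.2f}  X2@3.1GHz: {dual:.2f}")
```

Under this simple model the dual-core actually keeps up until the workload is very close to fully parallel, which is why the "7.2GHz vs 6.2GHz" sum should be treated as a best case.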

I was leaning towards 4GB of ram - which hopefully should be enough for
 dedup as I am only planning on dedupping my smaller file systems (backups
 and VMs)


4GB is a good start.


 Was my raidz2 performance comment above correct?  That the write speed is
 that of the slowest disk?  That is what I believe I have read.


You are sort-of-correct that it's the write speed of the slowest disk.

Mirrored drives will be faster, especially for random I/O, but you sacrifice
storage for that performance boost. That said, I have a similar setup as far
as the number of spindles goes, and I can push 200MB/sec+ through it and
saturate GigE for iSCSI - so maybe I'm being harsh on raidz2 :)


 Now on to the hard part of picking a motherboard that is supported and has
 enough SATA ports!


I used an ASUS board (M4A785-M) which has six (6) SATA2 ports onboard and
pretty decent Hypertransport throughput.

Hope that helps.

-marc


Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Edward Ned Harvey
 I plan to start with 5 1.5 TB drives in a raidz2 configuration and 2
 mirrored boot drives.

You want to use compression and deduplication and raidz2.  I hope you didn't
want to get any performance out of this system, because all of those are
compute or IO intensive.

FWIW ... 5 disks in raidz2 will have capacity of 3 disks.  But if you bought
6 disks in mirrored configuration, you have a small extra cost, and much
better performance.



Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Brian
Interesting comments..

But I am confused.

Performance for my backups (compression/deduplication) would most likely not be 
#1 priority.

I want my VMs to run fast - so is it deduplication that really slows things 
down?

Are you saying raidz2 would overwhelm current I/O controllers to the point 
where I could not saturate a 1Gb network link?

Is the CPU I am looking at not capable of doing dedup and compression?  Or are 
no CPUs capable of doing that currently?  If I only enable it for the backup 
filesystem will all my filesystems suffer performance wise?

Where are the bottlenecks in a raidz2 system that I will only access over a 
single gigabit link?  Are they insurmountable?



  I plan to start with 5 1.5 TB drives in a raidz2
 configuration and 2
  mirrored boot drives.
 
 You want to use compression and deduplication and
 raidz2.  I hope you didn't
 want to get any performance out of this system,
 because all of those are
 compute or IO intensive.
 
 FWIW ... 5 disks in raidz2 will have capacity of 3
 disks.  But if you bought
 6 disks in mirrored configuration, you have a small
 extra cost, and much
 better performance.
 


Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Bob Friesenhahn

On Thu, 4 Feb 2010, Brian wrote:

Was my raidz2 performance comment above correct?  That the write 
speed is that of the slowest disk?  That is what I believe I have 
read.


Data in raidz2 is striped so that it is split across multiple disks. 
In this (sequential) sense it is faster than a single disk.  For 
random access, though, the stripe's performance cannot be faster than 
the slowest disk.
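Bob's description can be put into a back-of-the-envelope model: sequential bandwidth scales with the number of data disks, while random IOPS stay near a single disk because every I/O touches the whole stripe. The per-disk figures below are illustrative assumptions for a typical 1.5 TB SATA drive, not benchmarks:

```python
# Back-of-the-envelope raidz model: sequential bandwidth scales with
# the data disks; random IOPS do not, since each I/O hits every disk
# in the stripe. Per-disk numbers are assumptions, not measurements.
def raidz_estimate(disks, parity, disk_mbps=100, disk_iops=80):
    data_disks = disks - parity
    return {
        "seq_mbps": data_disks * disk_mbps,  # striped across data disks
        "random_iops": disk_iops,            # whole stripe per random I/O
    }

# The 5-disk raidz2 from this thread: 3 data disks
print(raidz_estimate(disks=5, parity=2))
```

By the same model a stripe of mirrors gets roughly one disk's IOPS *per vdev*, which is the usual argument for mirrors under random VM workloads.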


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Edward Ned Harvey
 Data in raidz2 is striped so that it is split across multiple disks.

Partial truth.
Yes, the data is on more than one disk, but parity must be computed, which 
adds computation overhead and a write operation on each and every disk.  It's 
not simply striped.  Whenever you read or write, you need to access all the 
disks (or a bunch of them) and use compute cycles to generate the actual data 
stream.  I don't know enough about the underlying methods of calculating and 
distributing everything to say intelligently *why*, but I know this:

 In this (sequential) sense it is faster than a single disk.  

Whenever I benchmark raid5 versus a mirror, the mirror is always faster.
Noticeably and measurably faster, as in 50% to 4x faster.  (50% for a single
disk mirror versus a 6-disk raid5, and 4x faster for a stripe of mirrors, 6
disks with the capacity of 3, versus a 6-disk raid5.)  Granted, I'm talking
about raid5 and not raidz.  There is possibly a difference there, but I
don't think so.




Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Edward Ned Harvey
 I want my VMs to run fast - so is it deduplication that really slows
 things down?
 
 Are you saying raidz2 would overwhelm current I/O controllers to where
 I could not saturate 1 GB network link?
 
 Is the CPU I am looking at not capable of doing dedup and compression?
 Or are no CPUs capable of doing that currently?  If I only enable it
 for the backup filesystem will all my filesystems suffer performance
 wise?
 
 Where are the bottlenecks in a raidz2 system that I will only access
 over a single gigabit link?  Are they insurmountable?

I'm not sure if anybody can answer your questions.  I will suggest you just
try things out, and see for yourself.  Everybody would have different
techniques to tweak performance...

If you want to use fast compression and dedup, you need lots of CPU and RAM.
(You said 4G, but I don't think that's a lot.  I never buy a laptop with less
than 4G nowadays.  I'd call 16G and higher a lot of RAM.)

As for raidz2 and Ethernet ... I don't know.  If you've got 5 disks in a
raidz2 configuration, and assuming each disk can sustain 500Mbit, then
theoretically those disks might be able to achieve 1.5Gbit or 2.5Gbit with
perfect efficiency ... so maybe they can max out your Ethernet.  I don't
know.  But I do know that if you had a stripe of 3 mirrors, it would have
absolutely no trouble maxing out the Ethernet.  Even a single mirror could
just barely do that.  For 2 or more mirrors, it's cake.
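Ed's arithmetic as a small sketch, using his assumed 500 Mbit/s sustained rate per disk (an assumption, not a benchmark):

```python
# Can the data disks outrun a GigE link? Sketch of the arithmetic
# above; 500 Mbit/s per disk is an assumed sustained rate.
def can_saturate_gige(data_disks, per_disk_mbit=500, gige_mbit=1000):
    aggregate = data_disks * per_disk_mbit
    return aggregate, aggregate >= gige_mbit

print(can_saturate_gige(3))   # 5-disk raidz2 -> 3 data disks
print(can_saturate_gige(1))   # a single disk alone falls short
```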



Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Erik Trimble

Brian wrote:

Interesting comments..

But I am confused.

Performance for my backups (compression/deduplication) would most likely not be 
#1 priority.

I want my VMs to run fast - so is it deduplication that really slows things 
down?
  
Dedup requires a fair amount of CPU, but it really wants a big L2ARC and 
RAM.  I'd seriously consider no less than 8GB of RAM, and look at 
getting a smaller-sized (~40GB) SSD, something on the order of an Intel 
X25-M.


Also, iSCSI-served VMs tend to do mostly random I/O, which is better 
handled by a striped mirror than RaidZ. 


Are you saying raidz2 would overwhelm current I/O controllers to where I could 
not saturate 1 GB network link?
  

No.


Is the CPU I am looking at not capable of doing dedup and compression?  Or are 
no CPUs capable of doing that currently?  If I only enable it for the backup 
filesystem will all my filesystems suffer performance wise?
  
All the CPUs you indicate can handle the job, it's a matter of getting 
enough data to them.



Where are the bottlenecks in a raidz2 system that I will only access over a 
single gigabit link?  Are the insurmountable?
  
RaidZ is good for streaming writes of large size, where you should get 
performance roughly equal to the number of data drives.  Likewise, for 
streaming reads.  Small writes generally limit performance to a level of 
about 1 disk, regardless of the number of data drives in the RaidZ. 
Small reads are in-between in terms of performance.



Personally, I'd look into having 2 different zpools - a striped mirror 
for your iSCSI-shared VMs, and a raidz2 for your main storage. 

In any case, for dedup, you really should have an SSD for L2ARC, if at 
all possible.  Being able to store all the metadata for the entire zpool 
in the L2ARC really, really helps speed up dedup.
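Whether the dedup metadata would actually fit on a ~40GB device can be estimated the same way the RAM requirement is. The 320-byte DDT entry size and 128K average block size below are illustrative assumptions, not measured values:

```python
# Hedged check: would the dedup table for a pool fit on a given
# L2ARC SSD? Entry size and average block size are assumptions.
def ddt_fits_on_l2arc(pool_tib, ssd_gib=40,
                      avg_block=128 * 1024, entry_bytes=320):
    entries = (pool_tib * 1024 ** 4) // avg_block
    need_gib = entries * entry_bytes / 1024 ** 3
    return need_gib, need_gib <= ssd_gib

print(ddt_fits_on_l2arc(6))    # 6 TiB of unique data
print(ddt_fits_on_l2arc(20))   # 20 TiB: outgrows a 40 GB SSD
```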



Also, about your CPU choices, look here for a good summary of the 
current AMD processor features:


http://en.wikipedia.org/wiki/List_of_AMD_Phenom_microprocessors

(this covers the Phenom, Phenom II, and Athlon II families).


The main difference between the various models comes down to amount of 
L3 cache, and HT speed.  I'd be interested in doing some benchmarking to 
see exactly how the variations make a difference.



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 02/05/2010 03:21 AM, Edward Ned Harvey wrote:
 FWIW ... 5 disks in raidz2 will have capacity of 3 disks.  But if you bought
 6 disks in mirrored configuration, you have a small extra cost, and much
 better performance.

But the raidz2 can survive the loss of ANY two disks, while the 6-disk
mirror configuration will be destroyed if the two disks lost are from
the SAME pair.

- -- 
Jesus Cea Avion _/_/  _/_/_/_/_/_/
j...@jcea.es - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:j...@jabber.org _/_/_/_/  _/_/_/_/_/
.  _/_/  _/_/_/_/  _/_/  _/_/
Things are not so easy  _/_/  _/_/_/_/  _/_/_/_/  _/_/
My name is Dump, Core Dump   _/_/_/_/_/_/  _/_/  _/_/
El amor es poner tu felicidad en la felicidad de otro - Leibniz
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iQCVAwUBS2ukAZlgi5GaxT1NAQKD6wQAjI7zTFGmsHKtrhfSGS65edDecxwG8MSV
rDsxoDD0OFs5A1rAJBKZ0UWcRrrDt8iTUKyM0W13+3D2S3i6pxaMLU5jCLFEIPJ7
ZukQxUQ3eRLksXNCjsc7IlIyoe3GTwNclV8pymYCkHp+jggHASRyRtVnninDDX+g
zs1X2Rd4qwU=
=qzs+
-END PGP SIGNATURE-
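Jesus's point can be quantified: with three 2-way mirrors, two simultaneous failures destroy the pool only when both land in the same mirror, while raidz2 survives any two. A small combinatorial sketch:

```python
from math import comb

# Probability that 2 simultaneous disk failures kill a pool built
# from 2-way mirrors: both failures must hit the same mirror pair.
def two_failure_loss_probability(mirrors):
    disks = 2 * mirrors
    # same-mirror pairs / all possible failed pairs
    return mirrors / comb(disks, 2)

print(two_failure_loss_probability(3))  # 3 mirrors: 3/15 = 0.2
```

So the 3x2 mirror layout loses data in 20% of double-failure scenarios, whereas the 5-disk raidz2 loses data in none of them; the mirror's advantage is performance, not double-failure safety.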


Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Rob Logan

  I am leaning towards AMD because of ECC support 

Well, let's look at Intel's offerings... RAM is faster than AMD's
at 1333MHz DDR3, and one gets ECC and a thermal sensor for $10 over non-ECC:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820139040

This MB has two Intel ethernets and for an extra $30 an ether KVM (LOM)
http://www.newegg.com/Product/Product.aspx?Item=N82E16813182212

One needs a Xeon 34xx for ECC; the 45W version isn't on Newegg, and ignoring
the one without Hyper-Threading leaves us with:
http://www.newegg.com/Product/Product.aspx?Item=N82E16819117225

Yeah, @ 95W it isn't exactly low power, but 4 cores @ 2533MHz plus another
4 Hyper-Threading cores is nice.  If you only need one core, the marketing
paperwork claims it will push to 2.93GHz too.  But the RAM bandwidth is the
big win for Intel.

Avoid the temptation, but @ 2.8GHz without ECC, this one is close in price:
http://www.newegg.com/Product/Product.aspx?Item=N82E16819115214

Now, this gets one to 8G of ECC easily... AMD's unfair advantage is all those
RAM slots on their multi-die MBs... A slow AMD CPU with 64G of RAM
might be better, depending on your working set / dedup requirements.

Rob


