Re: [zfs-discuss] HP Proliant DL360 G7

2013-02-01 Thread Anh Quach
It's been about a year now and I figured I'd send out a little update for 
anyone curious. 

For density and cost considerations, we ended up going with a custom-built 
Supermicro double-sided storage chassis, which can hold 72 drives in 4U. We 
initially deployed 10 raidz2 vdevs for about 40TB usable; the remaining 12 
drive bays were used for cache, ZIL, and spares. We have been very happy with 
this setup and are about to double the capacity with the JBOD version of the 
same Supermicro chassis. 
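A pool along the lines described above could be sketched roughly as follows. This is a hedged illustration only: the post does not give vdev widths or device names, so the 6-disk raidz2 grouping (10 vdevs x 6 disks = 60 data disks, leaving 12 bays) and all device names are assumptions, not the poster's actual layout.

```shell
# Hypothetical sketch: 10 six-disk raidz2 vdevs, plus cache, log, and
# spares in the remaining 12 bays. Device names are placeholders.
zpool create tank \
  raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
  raidz2 c0t6d0 c0t7d0 c0t8d0 c0t9d0 c0t10d0 c0t11d0
# ...repeat for the remaining eight raidz2 vdevs...
zpool add tank cache c6t0d0 c6t1d0        # L2ARC read cache (SSDs)
zpool add tank log mirror c6t2d0 c6t3d0   # mirrored ZIL/SLOG
zpool add tank spare c6t4d0 c6t5d0        # hot spares
```

Whether to mirror the log devices and how wide to make each raidz2 vdev are tuning decisions; the sketch just shows where the three classes of auxiliary devices plug in.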

I went with 3 LSI 9211-8i controllers to match the Supermicro's 3 backplanes. 

In the testing phase, we evaluated Solaris 11 and Nexenta; ultimately we went 
with Nexenta due to its set of pre-written tools, although performance was 
slightly better on Solaris. 

-Anh


On Jul 3, 2012, at 1:49 PM, Anh Quach a...@blackandcode.com wrote:

 Thanks Richard and Edmund for the input. 
 
 Looking at the DL380 now... 
 
 -Anh
 
 
 On Jul 2, 2012, at 7:57 PM, Richard Elling wrote:
 
 On Jul 2, 2012, at 2:40 PM, Edmund White wrote:
 
 This depends upon what you want to do. I've used G6 and G7 ProLiants
 extensively in ZFS deployments (Nexenta, mostly). I'm assuming you'd be
 using an external JBOD enclosure?
 
 When I was at Nexenta, we qualed the DL380 G7, D2600, and D2700.
 These are some of the better boxes on the market.
 
 All works well. I disable the onboard Smart Array P410 RAID controller and
 replace it with an LSI SAS HBA. If using internal disks, I'll use the
 9211-8i. If external, the 9205-8e. Or sometimes, both.
 
 FYI, HP also sells an 8-port IT-style HBA (SC-08Ge), but it is hard to locate
 with their configurators. There might be a more modern equivalent cleverly
 hidden somewhere difficult to find.
  -- richard
 
 --
 ZFS Performance and Training
 richard.ell...@richardelling.com
 +1-760-896-4422
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RFE: Un-dedup for unique blocks

2013-02-01 Thread Joerg Schilling
casper@oracle.com wrote:

 It gets even better.  Executables become part of the swap space via
 mmap, so that if you have a lot of copies of the same process running in
 memory, the executable bits don't waste any more space (well, unless you
 use the sticky bit, although that might be deprecated, or if you copy
 the binary elsewhere.)  There's lots of awesome fun optimizations in
 UNIX. :)

 The sticky bit has never been used in that form in SunOS for as long
 as I can remember (SunOS 3.x), and probably before that. It no longer makes 
 sense for demand-paged executables.

SunOS-3.0 introduced NFS-root and swap on NFS. For that reason, the meaning of 
the sticky bit was changed to mean "do not cache writes to this file".

Note that SunOS-3.0 appeared with the new Sun3 machines (first built on 
24.12.1985).
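For anyone following along, the bit under discussion is the mode bit S_ISVTX (octal 01000). A quick way to see it on a modern system (a generic illustration, not specific to SunOS):

```shell
# Create a scratch file and set the sticky bit (S_ISVTX, octal 01000).
f=$(mktemp)
chmod 1644 "$f"
# GNU stat prints the octal mode; the leading 1 is the sticky bit.
stat -c '%a' "$f"    # prints: 1644
rm -f "$f"
```

On directories the sticky bit still has its familiar /tmp-style meaning (only the owner may delete entries); on regular files its effect, if any, is system-specific, which is exactly the historical variation being discussed here.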

Jörg

-- 
 EMail: jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
        j...@cs.tu-berlin.de (uni)
        joerg.schill...@fokus.fraunhofer.de (work)
 Blog:  http://schily.blogspot.com/
 URL:   http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily