Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-16 Thread Frank Cusack
You can just do fdisk to create a single large partition.  The attached
mirror doesn't have to be the same size as the first component.
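For example, assuming the new disk is c1t1d0 (hypothetical name) and the
existing root is on c1t0d0s0, something like:

# fdisk -B /dev/rdsk/c1t1d0p0     (one Solaris partition spanning the disk)
# format c1t1d0                   (partition: make slice 0 cover it)
# zpool attach rpool c1t0d0s0 c1t1d0s0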

On Thu, Dec 15, 2011 at 11:27 PM, Gregg Wonderly gregg...@gmail.com wrote:

 Cindy, will it ever be possible to just have attach mirror the surfaces,
 including the partition tables?  I spent an hour today trying to get a new
 mirror on my root pool.  There was a 250GB disk that failed.  I only had a
 1.5TB handy as a replacement.  prtvtoc ... | fmthard does not work in this
 case and so you have to do the partitioning by hand, which is just silly to
 fight with anyway.

 Gregg

 Sent from my iPhone

 On Dec 15, 2011, at 6:13 PM, Tim Cook t...@cook.ms wrote:

 Do you still need to do the grub install?
 On Dec 15, 2011 5:40 PM, Cindy Swearingen cindy.swearin...@oracle.com
 wrote:

 Hi Anon,

 The disk that you attach to the root pool will need an SMI label
 and a slice 0.

 The syntax to attach a disk to create a mirrored root pool
 is like this, for example:

 # zpool attach rpool c1t0d0s0 c1t1d0s0
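
On older releases where the attach does not install the boot blocks
automatically, something like this is also needed on x86 (device name
hypothetical):

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

zpool status rpool will then show the resilver progress.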

 Thanks,

 Cindy

 On 12/15/11 16:20, Anonymous Remailer (austria) wrote:


 On Solaris 10, if I install using ZFS root on only one drive, is there a
 way to add another drive as a mirror later? Sorry if this was discussed
 already. I searched the archives and couldn't find the answer. Thank you.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-16 Thread Andrew Gabriel

On 12/16/11 07:27 AM, Gregg Wonderly wrote:
Cindy, will it ever be possible to just have attach mirror the 
surfaces, including the partition tables?  I spent an hour today 
trying to get a new mirror on my root pool.  There was a 250GB disk 
that failed.  I only had a 1.5TB handy as a replacement.  prtvtoc ... 
| fmthard does not work in this case


Can you be more specific why it fails?
I have seen a couple of cases, and I'm wondering if you're hitting the 
same thing.

Can you post the prtvtoc output of your original disk please?

and so you have to do the partitioning by hand, which is just silly to 
fight with anyway.


Gregg


--
Andrew Gabriel


Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-16 Thread Anonymous Remailer (austria)

Thank you all for your answers and links :-)


Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-16 Thread Robert Milkowski


 -Original Message-
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Pawel Jakub Dawidek
 Sent: 10 December 2011 14:05
 To: Mertol Ozyoney
 Cc: zfs-discuss@opensolaris.org
 Subject: Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup
 
 On Wed, Dec 07, 2011 at 10:48:43PM +0200, Mertol Ozyoney wrote:
   Unfortunately the answer is no. Neither L1 nor L2 cache is dedup-aware.
 
  The only vendor i know that can do this is Netapp
 
 And you really work at Oracle?:)
 
  The answer is definitely yes. The ARC caches on-disk blocks and dedup just
  references those blocks. When you read, the dedup code is not involved at all.
  Let me show you with a simple test:
 
 Create a file (dedup is on):
 
   # dd if=/dev/random of=/foo/a bs=1m count=1024
 
 Copy this file so that it is deduped:
 
   # dd if=/foo/a of=/foo/b bs=1m
 
 Export the pool so all cache is removed and reimport it:
 
   # zpool export foo
   # zpool import foo
 
 Now let's read one file:
 
   # dd if=/foo/a of=/dev/null bs=1m
   1073741824 bytes transferred in 10.855750 secs (98909962 bytes/sec)
 
 We read file 'a' and all its blocks are in cache now. The 'b' file shares
all the
 same blocks, so if ARC caches blocks only once, reading 'b' should be much
 faster:
 
   # dd if=/foo/b of=/dev/null bs=1m
   1073741824 bytes transferred in 0.870501 secs (1233475634
 bytes/sec)
 
 Now look at it, 'b' was read 12.5 times faster than 'a' with no disk
activity.
 Magic?:)


Yep, however in pre-Solaris 11 GA (and in Illumos) you would end up with 2x
copies of blocks in the ARC cache, while in S11 GA the ARC will keep only 1
copy of all blocks. This can make a big difference if more than just two
files are deduped and you need ARC memory to cache other data as well.

-- 
Robert Milkowski





Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-16 Thread Cindy Swearingen

Hi Tim,

No, in current Solaris releases the boot blocks are installed
automatically with a zpool attach operation on a root pool.

Thanks,

Cindy

On 12/15/11 17:13, Tim Cook wrote:

Do you still need to do the grub install?

On Dec 15, 2011 5:40 PM, Cindy Swearingen cindy.swearin...@oracle.com
mailto:cindy.swearin...@oracle.com wrote:

Hi Anon,

The disk that you attach to the root pool will need an SMI label
and a slice 0.

The syntax to attach a disk to create a mirrored root pool
is like this, for example:

# zpool attach rpool c1t0d0s0 c1t1d0s0

Thanks,

Cindy

On 12/15/11 16:20, Anonymous Remailer (austria) wrote:


On Solaris 10 If I install using ZFS root on only one drive is
there a way
to add another drive as a mirror later? Sorry if this was discussed
already. I searched the archives and couldn't find the answer.
Thank you.




Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-16 Thread Cindy Swearingen

Hi Gregg,

Yes, fighting with partitioning is just silly.

My wish is that Santa will bring us bootable GPT/EFI labels in the coming
year, so you will be able to just attach whole disks to root pools.

Send us some output so we can see what the trouble is.

In the meantime, the links below might help.

Thanks,

Cindy


http://docs.oracle.com/cd/E23824_01/html/821-1459/disksprep-34.html

http://docs.oracle.com/cd/E23824_01/html/821-1459/diskssadd-2.html#diskssadd-5

http://docs.oracle.com/cd/E23824_01/html/821-1459/disksxadd-2.html#disksxadd-30



On 12/16/11 00:27, Gregg Wonderly wrote:

Cindy, will it ever be possible to just have attach mirror the surfaces,
including the partition tables? I spent an hour today trying to get a
new mirror on my root pool. There was a 250GB disk that failed. I only
had a 1.5TB handy as a replacement. prtvtoc ... | fmthard does not work
in this case and so you have to do the partitioning by hand, which is
just silly to fight with anyway.

Gregg

Sent from my iPhone

On Dec 15, 2011, at 6:13 PM, Tim Cook t...@cook.ms mailto:t...@cook.ms
wrote:


Do you still need to do the grub install?

On Dec 15, 2011 5:40 PM, Cindy Swearingen
cindy.swearin...@oracle.com mailto:cindy.swearin...@oracle.com wrote:

Hi Anon,

The disk that you attach to the root pool will need an SMI label
and a slice 0.

The syntax to attach a disk to create a mirrored root pool
is like this, for example:

# zpool attach rpool c1t0d0s0 c1t1d0s0

Thanks,

Cindy

On 12/15/11 16:20, Anonymous Remailer (austria) wrote:


On Solaris 10 If I install using ZFS root on only one drive is
there a way
to add another drive as a mirror later? Sorry if this was
discussed
already. I searched the archives and couldn't find the answer.
Thank you.



Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-16 Thread Gregg Wonderly
The issue is really quite simple.  The Solaris install, on x86 at least, 
chooses to use slice 0 for the root partition.  That slice is not created by a 
default format/fdisk, and so the web is strewn with


prtvtoc path/to/old/slice2 | fmthard -s - path/to/new/slice2

as a way to make the two commands access the entire disk.  If you have to 
use dissimilar-sized disks, because 1) that's the only media you have, or 2) you 
want to increase the size of your root pool, then all you end up with is an 
error message about overlapping partitions and no way to make progress.


If I then use dd if=/dev/zero to erase the front of the disk, and then fire up 
format, select fdisk, say yes to creating Solaris2 partitioning, and then use 
partition to add a slice 0, I will have problems getting the whole disk in play.
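
Concretely, the by-hand dance ends up being something like this (device
names as in my case; details from memory, so treat as a sketch):

# dd if=/dev/zero of=/dev/rdsk/c12d1p0 bs=512 count=2048   (wipe old labels)
# fdisk -B /dev/rdsk/c12d1p0              (create the Solaris2 partition)
# format c12d1                            (partition: add a full-size slice 0)
# zpool attach rpool c8t0d0s0 c12d1s0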


So the end result is that I have to jump through hoops when, in the end, I'd 
really like to just add the whole disk, every time.  If I say


zpool attach rpool c8t0d0s0 c12d1

I really do mean the whole disk, and I'm not sure why it can't just happen.  
Failing to type a slice reference is no worse a 'typo' than typing 's2' by 
accident, because that's what I've been typing with all the other commands to 
try and get the disk partitioned.


I just really think there's not a lot of value in all of this, especially with 
ZFS, where we can, in fact, add more disks/vdevs to keep expanding space, and 
extremely rarely is that going to be done, for the root pool, with fractions of 
disks.


The use of SMI and absolute refusal to use EFI partitioning plus all of this 
just stacks up to a pretty large barrier to simple and/or easy administration.


I'm very nervous when I have a simplex filesystem sitting there, and when a disk 
has died, I'm doubly nervous that the other half is going to fall over.


I'm not trying to be hard nosed about this, I'm just trying to share my angst 
and frustration with the details that drove me in that direction.


Gregg Wonderly

On 12/16/2011 2:56 AM, Andrew Gabriel wrote:

On 12/16/11 07:27 AM, Gregg Wonderly wrote:
Cindy, will it ever be possible to just have attach mirror the surfaces, 
including the partition tables?  I spent an hour today trying to get a new 
mirror on my root pool.  There was a 250GB disk that failed.  I only had a 
1.5TB handy as a replacement.  prtvtoc ... | fmthard does not work in this case


Can you be more specific why it fails?
I have seen a couple of cases, and I'm wondering if you're hitting the same 
thing.

Can you post the prtvtoc output of your original disk please?

and so you have to do the partitioning by hand, which is just silly to fight 
with anyway.


Gregg






Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-16 Thread Cindy Swearingen

Yep, well said, understood, point taken, I hear you, you're
preaching to the choir. Have faith in Santa.

A few comments:

1. I need more info on the x86 install issue. I haven't seen this
problem myself.

2. We don't use slice 2 for anything and it's not recommended.

3. The SMI disk is a long-standing boot requirement. We're
working on it.

4. Both the s10 and s11 installer can create a mirrored root pool
so you don't have to do this manually.

If you do have do this manually in the S11 release, you can use
this shortcut to slap on a new label but it does no error checking
so make sure you have the right disk:

# format -L vtoc -d c1t0d0

Unfortunately, this applies the default partition table, which
might be a 129MB slice 0, so you still have to do the other 17 steps to
create one large slice 0. I filed an RFE to do something like this:

# format -L vtoc -a(ll) s0 c1t0d0

5. The overlapping partition error on x86 systems is a bug (unless they
really are overlapping) and you can override it by using the -f option.
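
Putting points 4 and 5 together, the short path today is roughly this
(hypothetical device names; again, no error checking):

# format -L vtoc -d c1t1d0               (default SMI label on the new disk)
# format c1t1d0                          (partition: grow slice 0 to span the disk)
# zpool attach rpool c1t0d0s0 c1t1d0s0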

Thanks,

Cindy

On 12/16/11 09:44, Gregg Wonderly wrote:

The issue is really quite simple. The solaris install, on x86 at least,
chooses to use slice-0 for the root partition. That slice is not created
by a default format/fdisk, and so we have the web strewn with

prtvtoc path/to/old/slice2 | fmthard -s - path/to/new/slice2

As a way to cause the two commands to access the entire disk. If you
have to use dissimilar sized disks because 1) that's the only media you
have, or 2) you want to increase the size of your root pool, then all we
end up with, is an error message about overlapping partitions and no
ability to make progress.

If I then use dd if=/dev/zero to erase the front of the disk, and then
fire up format, select fdisk, say yes to creating Solaris2 partitioning,
and then use partition to add a slice 0, I will have problems getting
the whole disk in play.

So, the end result, is that I have to jump through hoops, when in the
end, I'd really like to just add the whole disk, every time. If I say

zpool attach rpool c8t0d0s0 c12d1

I really do mean the whole disk, and I'm not sure why it can't just
happen. Failing to type a slice reference, is no worse of a 'typo'
than typing 's2' by accident, because that's what I've been typing with
all the other commands to try and get the disk partitioned.

I just really think there's not a lot of value in all of this,
especially with ZFS, where we can, in fact add more disks/vdevs to a
keep expanding space, and extremely rarely is that going to be done, for
the root pool, with fractions of disks.

The use of SMI and absolute refusal to use EFI partitioning plus all of
this just stacks up to a pretty large barrier to simple and/or easy
administration.

I'm very nervous when I have a simplex filesystem sitting there, and
when a disk has died, I'm doubly nervous that the other half is going
to fall over.

I'm not trying to be hard nosed about this, I'm just trying to share my
angst and frustration with the details that drove me in that direction.

Gregg Wonderly

On 12/16/2011 2:56 AM, Andrew Gabriel wrote:

On 12/16/11 07:27 AM, Gregg Wonderly wrote:

Cindy, will it ever be possible to just have attach mirror the
surfaces, including the partition tables? I spent an hour today
trying to get a new mirror on my root pool. There was a 250GB disk
that failed. I only had a 1.5TB handy as a replacement. prtvtoc ... |
fmthard does not work in this case


Can you be more specific why it fails?
I have seen a couple of cases, and I'm wondering if you're hitting the
same thing.
Can you post the prtvtoc output of your original disk please?


and so you have to do the partitioning by hand, which is just silly
to fight with anyway.

Gregg






[zfs-discuss] SATA hardware advice

2011-12-16 Thread tono
I could use some help with choosing hardware for a storage server. For
budgetary and density reasons, we had settled on LFF SATA drives in the
storage server. I had closed in on models from HP (DL180 G6) and IBM
(x3630 M3), before discovering warnings against connecting SATA drives
with SAS expanders.

So I'd like to ask what's the safest way to manage SATA drives. We're
looking for a 12-bay (ideally 14-bay) LFF server, 2-3U, similar to the above
models. The HP and IBM models both come with SAS expanders built into
their backplanes. My questions are:

1. Kludginess aside, can we build a dependable SMB server using
integrated HP or IBM expanders plus the workaround
(allow-bus-device-reset=0) presented here: 
http://gdamore.blogspot.com/2010/12/update-on-sata-expanders.html ?

2. Would it be better to find a SATA card with lots of ports, and make
1:1 connections? I found some cards (arc-128, Adaptec 2820SA) w/Solaris
support, for example, but I don't know how reliable they are or whether
they support a clean JBOD mode.

3. Assuming native SATA is the way to go, where should we look for
hardware? I'd like the IBM & HP options because of the LOM & warranty,
but I wouldn't think the hot-swap backplane offers any way to bypass the
SAS expanders (correct me if I'm wrong here!). I found this JBOD:
http://www.pc-pitstop.com/sata_enclosures/sat122urd.asp  I also know
about SuperMicro. Are there any other vendors or models worth
considering?

Thanks!


Re: [zfs-discuss] SATA hardware advice

2011-12-16 Thread Hung-Sheng Tsao (laoTsao)
IMHO, if possible pick SAS 7200rpm HDDs.
No HW RAID for ZFS.
Mirror, with a ZIL and a good amount of memory.
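
A sketch of that layout (all device names hypothetical):

# zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0 log c3t0d0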


Sent from my iPad

On Dec 16, 2011, at 17:36, t...@ownmail.net wrote:

 I could use some help with choosing hardware for a storage server. For
 budgetary and density reasons, we had settled on LFF SATA drives in the
 storage server. I had closed in on models from HP (DL180 G6) and IBM
 (x3630 M3), before discovering warnings against connecting SATA drives
 with SAS expanders.
 
 So I'd like to ask what's the safest way to manage SATA drives. We're
 looking for a 12 (ideally 14) LFF server, 2-3U, similar to the above
 models. The HP and IBM models both come with SAS expanders built into
 their backplanes. My questions are:
 
 1. Kludginess aside, can we build a dependable SMB server using
 integrated HP or IBM expanders plus the workaround
 (allow-bus-device-reset=0) presented here: 
 http://gdamore.blogspot.com/2010/12/update-on-sata-expanders.html ?
 
 2. Would it be better to find a SATA card with lots of ports, and make
 1:1 connections? I found some cards (arc-128, Adaptec 2820SA) w/Solaris
 support, for example, but I don't know how reliable they are or whether
 they support a clean JBOD mode.
 
 3. Assuming native SATA is the way to go, where should we look for
 hardware? I'd like the IBM  HP options because of the LOM  warranty,
 but I wouldn't think the hot-swap backplane offers any way to bypass the
 SAS expanders (correct me if I'm wrong here!). I found this JBOD:
 http://www.pc-pitstop.com/sata_enclosures/sat122urd.asp  I also know
 about SuperMicro. Are there any other vendors or models worth
 considering?
 
 Thanks!


Re: [zfs-discuss] SATA hardware advice

2011-12-16 Thread Edmund White
If you're building from scratch, please choose nearline/midline SAS disks
instead of SATA if you're looking for capacity. For detailed reasoning,
see: http://serverfault.com/a/331504/13325

For the server, I've had great success with HP ProLiant systems, focusing
on the DL380 G6/G7 models. If you can budget 4U of rackspace, the DL370 G6
is a good option that can accommodate 14 LFF or 24 SFF disks (or a
combination). I've built onto DL180 G6 systems as well. If you do the
DL180 G6, you'll need a 12-bay LFF model. I'd recommend a Lights-Out 100
license key to gain remote console. The backplane has a built-in SAS
expander, so you'll only have a single 4-lane SAS cable to the controller.
I typically use LSI controllers. In the DL180, I would spec a LSI 9211-4i
SAS HBA. You have room to mount a ZIL or L2Arc internally and leverage the
motherboard SATA ports. Otherwise, consider an LSI 9211-8i HBA and use the
second 4-lane SAS connector for those.

See: http://www.flickr.com/photos/ewwhite/sets/72157625918734321/ for an
example of the DL380 G7 build.

-- 
Edmund White
ewwh...@mac.com




On 12/17/11 12:24 AM, Hung-Sheng Tsao (laoTsao) laot...@gmail.com
wrote:

imho, if possible pick sas 7200 hdd
no hw-raid for ZFS
mirror and with ZIL and good size memory


Sent from my iPad

On Dec 16, 2011, at 17:36, t...@ownmail.net wrote:

 I could use some help with choosing hardware for a storage server. For
 budgetary and density reasons, we had settled on LFF SATA drives in the
 storage server. I had closed in on models from HP (DL180 G6) and IBM
 (x3630 M3), before discovering warnings against connecting SATA drives
 with SAS expanders.
 
 So I'd like to ask what's the safest way to manage SATA drives. We're
 looking for a 12 (ideally 14) LFF server, 2-3U, similar to the above
 models. The HP and IBM models both come with SAS expanders built into
 their backplanes. My questions are:
 
 1. Kludginess aside, can we build a dependable SMB server using
 integrated HP or IBM expanders plus the workaround
 (allow-bus-device-reset=0) presented here:
 http://gdamore.blogspot.com/2010/12/update-on-sata-expanders.html ?
 
 2. Would it be better to find a SATA card with lots of ports, and make
 1:1 connections? I found some cards (arc-128, Adaptec 2820SA) w/Solaris
 support, for example, but I don't know how reliable they are or whether
 they support a clean JBOD mode.
 
 3. Assuming native SATA is the way to go, where should we look for
 hardware? I'd like the IBM  HP options because of the LOM  warranty,
 but I wouldn't think the hot-swap backplane offers any way to bypass the
 SAS expanders (correct me if I'm wrong here!). I found this JBOD:
 http://www.pc-pitstop.com/sata_enclosures/sat122urd.asp  I also know
 about SuperMicro. Are there any other vendors or models worth
 considering?
 
 Thanks!


Re: [zfs-discuss] SATA hardware advice

2011-12-16 Thread Fajar A. Nugraha
On Sat, Dec 17, 2011 at 6:48 AM, Edmund White ewwh...@mac.com wrote:
 If you can budget 4U of rackspace, the DL370 G6
 is a good option that can accommodate 14LFF or 24 SFF disks (or a
 combination). I've built onto DL180 G6 systems as well. If you do the
 DL180 G6, you'll need a 12-bay LFF model. I'd recommend a Lights-Out 100
 license key to gain remote console. The backplane has a built-in SAS
 expander, so you'll only have a single 4-lane SAS cable to the controller.
 I typically use LSI controllers. In the DL180, I would spec a LSI 9211-4i
 SAS HBA. You have room to mount a ZIL or L2Arc internally and leverage the
 motherboard SATA ports. Otherwise, consider a LSI 9211-8i HBA and use the
 second 4-lane SAS connector for those.

 See: http://www.flickr.com/photos/ewwhite/sets/72157625918734321/ for an
 example of the DL380 G7 build.

I assume you bought the controller separately, not from HP, right? Are
there any other parts you need to buy separately (e.g. cables)?
How about the disks? Are they from HP?


-- 
Fajar