Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-16 Thread Gregg Wonderly

On 11/10/2011 7:42 AM, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of darkblue

1 * XEON 5606
1 * supermicro X8DT3-LN4F
6 * 4G RECC RAM
22 * WD RE3 1T harddisk
4 * intel 320 (160G) SSD
1 * supermicro 846E1-900B chassis

I just want to say, this isn't supported hardware, and although many people 
will say they do this without problem, I've heard just as many people 
(including myself) saying it's unstable that way.

I recommend buying either the oracle hardware or the nexenta on whatever they 
recommend for hardware.

Definitely DO NOT run the free version of solaris without updates and expect it 
to be reliable.  But that's a separate issue.  I'm also emphasizing that even 
if you pay for solaris support on non-oracle hardware, don't expect it to be 
great.  But maybe it will be.
I think the key issue here, is whether this hardware will corrupt a pool or 
not.  Ultimately, the promise of ZFS, for me anyways, is that I can take disks 
to new hardware if/when needed.  I am not dependent on a controller or 
motherboard which provides some feature key to access the data on the disks.


Companies that sell key software you depend on have generally proven that 
software works reliably on the hardware they sell to run it.


Apple's business model and success, for example, are based on this fact, because 
they have a much smaller bug pool to consider.  Oracle hardware works out the 
same way.


I think supporting the development of ZFS is key to the next generation of 
storage solutions...  But, I don't need the class of hardware that Oracle wants 
me to pay for.  I need disks with 24/7 reliability.  I can wait till tomorrow to 
store something onto my server from my laptop/desktop.  Consumer/non-enterprise 
needs are quite different, and I don't think Oracle understands how to deal in 
the 1,000,000,000 potential customer marketplace.   They've had a hard enough 
time just working in the 100,000 customer marketplace.


Gregg


Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-11 Thread darkblue
2011/11/11 Jeff Savit jeff.sa...@oracle.com

  On 11/10/2011 06:38 AM, Edward Ned Harvey wrote:

  From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Jeff Savit

 Also, not a good idea for
 performance to partition the disks as you suggest.

  Not totally true.  By default, if you partition the disks, then the disk 
 write cache gets disabled.  But it's trivial to simply force enable it thus 
 solving the problem.


  Granted - I just didn't want to get into a long story. With a
 self-described 'newbie' building a storage server I felt the best advice is
 to keep it as simple as possible without adding steps (and without adding
 exposition about cache on partitioned disks - but now that you brought it
 up, yes, he can certainly do that).

 Besides, there's always a way to fill up the 1TB disks :-) Besides the OS
 image, it could also store gold images for the guest virtual machines,
 maintained separately from the operational images.


How big a partition do you suggest for the Solaris OS?

regards, Jeff




 --


 *Jeff Savit* | Principal Sales Consultant
 Phone: 602.824.6275 | Email: jeff.sa...@oracle.com | Blog:
 http://blogs.oracle.com/jsavit
 Oracle North America Commercial Hardware
 Operating Environments & Infrastructure S/W Pillar
 2355 E Camelback Rd | Phoenix, AZ 85016






Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-11 Thread Fajar A. Nugraha
On Fri, Nov 11, 2011 at 2:52 PM, darkblue darkblue2...@gmail.com wrote:
 I recommend buying either the oracle hardware or the nexenta on whatever
 they recommend for hardware.

 Definitely DO NOT run the free version of solaris without updates and
 expect it to be reliable.

 That's a bit strong.  Yes I do regularly update my supported (Oracle)
 systems, but I've never had problems with my own build Solaris Express
 systems.

 I waste far more time on (now luckily legacy) fully supported Solaris 10
 boxes!

 what does it mean?

It means some people have experienced problems on both supported and
unsupported Solaris boxes, but using Oracle hardware gives you a
higher chance of having fewer problems, since Oracle (supposedly) tests
their software on their hardware regularly to make sure it all works
nicely.

 I am going to install Solaris 10 u10 on this server. Is there any problem
 with compatibility?

As mentioned earlier, if you want a fully-tested configuration, running
Solaris on Oracle hardware is a no-brainer choice.

Another alternative is using nexenta on hardware they certify, like
http://www.nexenta.com/corp/newsflashes/86-2010/728-nexenta-announces-supermicro-partnership
, since they've run enough tests on the combination.

Also, if you look at posts on this list, the usual recommendation is
to use SAS disks instead of SATA for best performance and reliability.

 And which version of Solaris or Solaris derivative do you suggest for building
 storage with the above hardware?

Why not the recently-released solaris 11?

And while we're on the subject, if using legal software is among your
concerns, and you don't have solaris support (something like
$2k/socket/year, which is the only legal way to license solaris for
non-oracle hardware), why not use openindiana?

-- 
Fajar


Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-11 Thread Ian Collins

On 11/11/11 08:52 PM, darkblue wrote:



2011/11/11 Ian Collins i...@ianshome.com

On 11/11/11 02:42 AM, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of darkblue

1 * XEON 5606
1 * supermicro X8DT3-LN4F
6 * 4G RECC RAM
22 * WD RE3 1T harddisk
4 * intel 320 (160G) SSD
1 * supermicro 846E1-900B chassis

I just want to say, this isn't supported hardware, and
although many people will say they do this without problem,
I've heard just as many people (including myself) saying it's
unstable that way.


I've never had issues with Supermicro boards.  I'm using a similar
model and everything on the board is supported.

I recommend buying either the oracle hardware or the nexenta
on whatever they recommend for hardware.

Definitely DO NOT run the free version of solaris without
updates and expect it to be reliable.


That's a bit strong.  Yes I do regularly update my supported
(Oracle) systems, but I've never had problems with my own build
Solaris Express systems.

I waste far more time on (now luckily legacy) fully supported
Solaris 10 boxes!


what does it mean?


Solaris 10 live upgrade is a pain in the arse!  It gets confused when you have 
lots of filesystems, clones and zones.


I am going to install Solaris 10 u10 on this server. Is there any problem 
about compatibility?
And which version of Solaris or Solaris derivative do you suggest to 
build storage with the above hardware?


I'm running 11 Express now, upgrading to Solaris 11 this weekend.  Unless you 
have good reason to use Solaris 10, use Solaris 11 or OpenIndiana.

--
Ian.



Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-11 Thread darkblue
2011/11/11 Ian Collins i...@ianshome.com

 On 11/11/11 08:52 PM, darkblue wrote:



 2011/11/11 Ian Collins i...@ianshome.com


On 11/11/11 02:42 AM, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of darkblue

1 * XEON 5606
1 * supermicro X8DT3-LN4F
6 * 4G RECC RAM
22 * WD RE3 1T harddisk
4 * intel 320 (160G) SSD
1 * supermicro 846E1-900B chassis

I just want to say, this isn't supported hardware, and
although many people will say they do this without problem,
I've heard just as many people (including myself) saying it's
unstable that way.


I've never had issues with Supermicro boards.  I'm using a similar
model and everything on the board is supported.

I recommend buying either the oracle hardware or the nexenta
on whatever they recommend for hardware.

Definitely DO NOT run the free version of solaris without
updates and expect it to be reliable.


That's a bit strong.  Yes I do regularly update my supported
(Oracle) systems, but I've never had problems with my own build
Solaris Express systems.

I waste far more time on (now luckily legacy) fully supported
Solaris 10 boxes!


 what does it mean?


 Solaris 10 live upgrade is a pain in the arse!  It gets confused when you
 have lots of filesystems, clones and zones.


  I am going to install Solaris 10 u10 on this server. Is there any problem
 about compatibility?

 And which version of Solaris or Solaris derivative do you suggest to build
 storage with the above hardware?


 I'm running 11 Express now, upgrading to Solaris 11 this weekend.  Unless
 you have good reason to use Solaris 10, use Solaris 11 or OpenIndiana.


I once considered OpenIndiana, but it's still in the development stage; I
don't know if this version (oi_151a) is stable enough for production usage.

-- 
 Ian.




Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-11 Thread Jeff Savit

On 11/11/2011 01:02 AM, darkblue wrote:



2011/11/11 Jeff Savit jeff.sa...@oracle.com


On 11/10/2011 06:38 AM, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jeff Savit

Also, not a good idea for
performance to partition the disks as you suggest.

Not totally true.  By default, if you partition the disks, then the disk 
write cache gets disabled.  But it's trivial to simply force enable it thus 
solving the problem.


Granted - I just didn't want to get into a long story. With a
self-described 'newbie' building a storage server I felt the best
advice is to keep it as simple as possible without adding steps (and
without adding exposition about cache on partitioned disks - but
now that you brought it up, yes, he can certainly do that).

Besides, there's always a way to fill up the 1TB disks :-) Besides
the OS image, it could also store gold images for the guest
virtual machines, maintained separately from the operational images.


How big a partition do you suggest for the Solaris OS?
That's one of the best things about ZFS and *not* putting separate pools 
on the same disk - you don't have to worry about sizing partitions. Use 
two of the rotating disks to install Solaris on a mirrored root pool 
(rpool). The OS build will take up a small portion of the 1TB usable 
data (and you don't want to go above 80% full so it's really 800GB 
effectively). You can use the remaining space in that pool for 
additional ZFS datasets to hold golden OS images, iTunes, backups, 
whatever. Or simply not worry about it and let there be unused space. 
Disk space is relatively cheap - complexity and effort are not. For all 
we know, the disk space you're buying is more than ample for the 
application and it might not even be worth devising the most 
space-efficient layout.  If that's not the case, then the next topic 
would be how to stretch capacity via clones, compression, and RAIDZn.
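
For example, once the installer has set up the mirrored rpool, it's just a 
matter of creating datasets as you need them (the dataset names here are only 
illustrative):

    # see how much space the OS installation actually uses:
    zfs list -r rpool
    # carve out datasets for golden images, backups, etc. -- no sizing required:
    zfs create -o compression=on rpool/goldimages
    zfs create rpool/backups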


Along with several others posting here, I recommend you use Solaris 11 
rather than Solaris 10. A lot of things are much easier, such as 
managing boot environments and sharing file systems via NFS, CIFS, 
iSCSI, and there's a lot of added functionality.  I further (and 
strongly) endorse the suggestion of using a system from Oracle with 
supported OS and hardware, but I don't want to get into any arguments 
about hardware or licensing please.
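
On the boot environment point, for example, updates on Solaris 11 land in a 
new BE so the running system stays untouched, and falling back is one command 
(the BE name below is just an example):

    pkg update                 # larger updates get staged into a new boot environment
    beadm list                 # shows the BEs and which one is active on next boot
    beadm activate solaris-1   # fall back to an older BE if the update misbehaves
    init 6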


regards, Jeff


Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-10 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Jeff Savit
 
 Also, not a good idea for
 performance to partition the disks as you suggest.

Not totally true.  By default, if you partition the disks, then the disk write 
cache gets disabled.  But it's trivial to simply force enable it thus solving 
the problem.
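
Roughly, in format's expert mode (the exact prompts vary a little by release, 
and you pick the disk interactively):

    # format -e
    ...select the partitioned disk...
    format> cache
    cache> write_cache
    write_cache> display
    write_cache> enable
    write_cache> quit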



Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-10 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of darkblue
 
 1 * XEON 5606
 1 * supermicro X8DT3-LN4F
 6 * 4G RECC RAM
 22 * WD RE3 1T harddisk
 4 * intel 320 (160G) SSD
 1 * supermicro 846E1-900B chassis

I just want to say, this isn't supported hardware, and although many people 
will say they do this without problem, I've heard just as many people 
(including myself) saying it's unstable that way.

I recommend buying either the oracle hardware or the nexenta on whatever they 
recommend for hardware.

Definitely DO NOT run the free version of solaris without updates and expect it 
to be reliable.  But that's a separate issue.  I'm also emphasizing that even 
if you pay for solaris support on non-oracle hardware, don't expect it to be 
great.  But maybe it will be.



Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-10 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of darkblue
 
 Why would you want your root pool to be on the SSD? Do you expect an
 extremely high I/O rate for the OS disks? Also, not a good idea for
 performance to partition the disks as you suggest.
 
  because having the Solaris OS occupy a whole 1TB disk would be a waste
  and the RAM is only 24G; can it handle such a big cache (160G)?

Putting rpool on the SSD is a waste.  Instead of partitioning the SSD into 
cache & rpool, why not partition the 1TB HDD into something like 100G for 
rpool, and the rest for the main data pool?  It makes sense if you're using 
mirrors instead of raidz.  (I definitely recommend using mirrors instead of 
raidz for your system running VMs.)
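
Something like this, if you go that way (slice and device names are made up, 
and the rpool half would normally be laid down by the installer):

    # installer puts a mirrored rpool on a ~100G slice 0 of two of the HDDs:
    #   rpool = mirror of c0t0d0s0 and c0t1d0s0
    # the big remaining slice on each of those disks goes into the data pool:
    zpool create tank mirror c0t0d0s1 c0t1d0s1
    # then grow the data pool with mirrored pairs of whole 1TB disks:
    zpool add tank mirror c0t2d0 c0t3d0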





Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-10 Thread Jeff Savit

On 11/10/2011 06:38 AM, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jeff Savit

Also, not a good idea for
performance to partition the disks as you suggest.

Not totally true.  By default, if you partition the disks, then the disk write 
cache gets disabled.  But it's trivial to simply force enable it thus solving 
the problem.

Granted - I just didn't want to get into a long story. With a 
self-described 'newbie' building a storage server I felt the best advice 
is to keep it as simple as possible without adding steps (and without 
adding exposition about cache on partitioned disks - but now that you 
brought it up, yes, he can certainly do that).


Besides, there's always a way to fill up the 1TB disks :-) Besides the 
OS image, it could also store gold images for the guest virtual 
machines, maintained separately from the operational images.


regards, Jeff



--


*Jeff Savit* | Principal Sales Consultant
Phone: 602.824.6275 | Email: jeff.sa...@oracle.com | Blog: 
http://blogs.oracle.com/jsavit

Oracle North America Commercial Hardware
Operating Environments & Infrastructure S/W Pillar
2355 E Camelback Rd | Phoenix, AZ 85016





Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-10 Thread Ian Collins

On 11/11/11 02:42 AM, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of darkblue

1 * XEON 5606
1 * supermicro X8DT3-LN4F
6 * 4G RECC RAM
22 * WD RE3 1T harddisk
4 * intel 320 (160G) SSD
1 * supermicro 846E1-900B chassis

I just want to say, this isn't supported hardware, and although many people 
will say they do this without problem, I've heard just as many people 
(including myself) saying it's unstable that way.


I've never had issues with Supermicro boards.  I'm using a similar model 
and everything on the board is supported.

I recommend buying either the oracle hardware or the nexenta on whatever they 
recommend for hardware.

Definitely DO NOT run the free version of solaris without updates and expect it 
to be reliable.


That's a bit strong.  Yes I do regularly update my supported (Oracle) 
systems, but I've never had problems with my own build Solaris Express 
systems.


I waste far more time on (now luckily legacy) fully supported Solaris 10 
boxes!


--
Ian.



Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-10 Thread darkblue
2011/11/11 Ian Collins i...@ianshome.com

 On 11/11/11 02:42 AM, Edward Ned Harvey wrote:

 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of darkblue

 1 * XEON 5606
 1 * supermicro X8DT3-LN4F
 6 * 4G RECC RAM
 22 * WD RE3 1T harddisk
 4 * intel 320 (160G) SSD
 1 * supermicro 846E1-900B chassis

 I just want to say, this isn't supported hardware, and although many
 people will say they do this without problem, I've heard just as many
 people (including myself) saying it's unstable that way.


 I've never had issues with Supermicro boards.  I'm using a similar model
 and everything on the board is supported.

  I recommend buying either the oracle hardware or the nexenta on whatever
 they recommend for hardware.

 Definitely DO NOT run the free version of solaris without updates and
 expect it to be reliable.


 That's a bit strong.  Yes I do regularly update my supported (Oracle)
 systems, but I've never had problems with my own build Solaris Express
 systems.

 I waste far more time on (now luckily legacy) fully supported Solaris 10
 boxes!


what does it mean?
I am going to install Solaris 10 u10 on this server. Is there any problem
about compatibility?
And which version of Solaris or Solaris derivative do you suggest to build
storage with the above hardware?

 --
 Ian.




[zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-09 Thread darkblue
hi, all
I am a newbie on ZFS. Recently, my company has been planning to build an
entry-level enterprise storage server.
here is the hardware list:

1 * XEON 5606
1 * supermicro X8DT3-LN4F
6 * 4G RECC RAM
22 * WD RE3 1T harddisk
4 * intel 320 (160G) SSD
1 * supermicro 846E1-900B chassis

this storage is going to serve:
1、100+ VMware and Xen guests
2、backup storage

my original plan is:
1、create a mirrored root within a pair of SSDs, then partition one of them
for cache (L2ARC). Is this reasonable?
2、the other pair of SSD will be used for ZIL
3、I haven't got a clear scheme for the 22 WD disks.

Any suggestions?
Especially, how do I get step No. 1 done?
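
What I imagine for step 1 is roughly this (the pool and device names are just 
placeholders, so please correct me if it's wrong):

    # installer puts a mirrored rpool on slice 0 of the two SSDs,
    # then the leftover slice on each SSD gets added to the data pool as L2ARC:
    zpool add tank cache c2t0d0s1 c2t1d0s1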


Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-09 Thread Jeff Savit

Hi darkblue, comments in-line

On 11/09/2011 06:11 PM, darkblue wrote:

hi, all
I am a newbie on ZFS. Recently, my company has been planning to build an 
entry-level enterprise storage server.

here is the hardware list:

1 * XEON 5606
 1 * supermicro X8DT3-LN4F
6 * 4G RECC RAM
22 * WD RE3 1T harddisk
4 * intel 320 (160G) SSD
1 * supermicro 846E1-900B chassis

this storage is going to serve:
 1、100+ VMware and Xen guests
2、backup storage

my original plan is:
1、create a mirrored root within a pair of SSDs, then partition one of 
them for cache (L2ARC). Is this reasonable?
Why would you want your root pool to be on the SSD? Do you expect an 
extremely high I/O rate for the OS disks? Also, not a good idea for 
performance to partition the disks as you suggest.



2、the other pair of SSD will be used for ZIL

How about using one pair of SSDs for ZIL, and the other pair of SSDs for L2ARC?


3、I haven't got a clear scheme for the 22 WD disks.
I suggest a mirrored pool on the WD disks for a root ZFS pool, and the 
other 20 disks for a data pool (quite possibly also a mirror) that also 
incorporates the 4 SSD, using 2 each for ZIL and L2ARC.  If you want to 
isolate different groups of virtual disks then you could have other 
possibilities. Maybe split the 20 disks between guest virtual disks and 
a backup pool. Lots of possibilities.
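
A rough sketch of that in zpool terms (all device names are invented, and how 
you group the 20 data disks is up to you):

    # root pool: mirror of two WD disks (done at install time)
    # data pool: mirrored pairs plus the SSDs as log and cache devices
    zpool create tank \
        mirror c1t0d0 c1t1d0 \
        mirror c1t2d0 c1t3d0 \
        mirror c1t4d0 c1t5d0      # ...and so on for the remaining pairs
    zpool add tank log mirror c2t0d0 c2t1d0    # two SSDs as a mirrored ZIL
    zpool add tank cache c2t2d0 c2t3d0         # two SSDs as L2ARC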




Any suggestions?
Especially, how do I get step No. 1 done?
Creating the mirrored root pool is easy enough at install time - just 
save the SSD for the guest virtual disks.  All of this is in the absence of 
the actual performance characteristics you expect, but that's a 
reasonable starting point.
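
And if the installer was only given a single disk, the mirror can be added 
afterwards (x86 assumed, device names made up):

    # attach the second disk to the root pool to form the mirror:
    zpool attach rpool c1t0d0s0 c1t1d0s0
    # on x86, also put the boot blocks on the new half of the mirror:
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
    zpool status rpool        # wait for the resilver to complete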


I hope that's useful...  Jeff

--


*Jeff Savit* | Principal Sales Consultant
Phone: 602.824.6275 | Email: jeff.sa...@oracle.com | Blog: 
http://blogs.oracle.com/jsavit

Oracle North America Commercial Hardware
Operating Environments & Infrastructure S/W Pillar
2355 E Camelback Rd | Phoenix, AZ 85016





Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-09 Thread darkblue
2011/11/10 Jeff Savit jeff.sa...@oracle.com

 Hi darkblue, comments in-line


 On 11/09/2011 06:11 PM, darkblue wrote:

 hi, all
 I am a newbie on ZFS. Recently, my company has been planning to build an
 entry-level enterprise storage server.
 here is the hardware list:

 1 * XEON 5606
 1 * supermicro X8DT3-LN4F
 6 * 4G RECC RAM
 22 * WD RE3 1T harddisk
 4 * intel 320 (160G) SSD
 1 * supermicro 846E1-900B chassis

 this storage is going to serve:
 1、100+ VMware and Xen guests
 2、backup storage

 my original plan is:
 1、create a mirrored root within a pair of SSDs, then partition one of them
 for cache (L2ARC). Is this reasonable?

 Why would you want your root pool to be on the SSD? Do you expect an
 extremely high I/O rate for the OS disks? Also, not a good idea for
 performance to partition the disks as you suggest.

  because having the Solaris OS occupy a whole 1TB disk would be a waste
 and the RAM is only 24G; can it handle such a big cache (160G)?

2、the other pair of SSD will be used for ZIL

How about using one pair of SSDs for ZIL, and the other pair of SSDs for L2ARC?


 3、I haven't got a clear scheme for the 22 WD disks.

 I suggest a mirrored pool on the WD disks for a root ZFS pool, and the
 other 20 disks for a data pool (quite possibly also a mirror) that also
 incorporates the 4 SSD, using 2 each for ZIL and L2ARC.  If you want to
 isolate different groups of virtual disks then you could have other
 possibilities. Maybe split the 20 disks between guest virtual disks and a
 backup pool. Lots of possibilities.

 Hmm, could you give me an example and more detailed info?
Suppose that after mirroring the 20 hard disks we get 10TB of usable space, and
6TB will be used for guest VMs while 4TB will be used for backup purposes.
Within the 6TB, 3TB might go through iSCSI to the Xen domUs and the other 3TB
through NFS to the VMware guests.
The 4TB might go through NFS for backup.
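
To be concrete, I imagine it would mostly be datasets and zvols with quotas, 
something like the following ("tank" and all the names are placeholders, and 
I'm only guessing at the COMSTAR side for iSCSI):

    # NFS shares for the VMware guests and for backup, capped with quotas:
    zfs create -o quota=3t -o sharenfs=on tank/vmware
    zfs create -o quota=4t -o sharenfs=on tank/backup
    # a 3TB zvol to export over iSCSI to the Xen domUs:
    zfs create -V 3t tank/xen-lun
    # register it as a COMSTAR logical unit (target and view setup not shown):
    stmfadm create-lu /dev/zvol/rdsk/tank/xen-lun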
Thanks in advance.


 any suggestion?
 especially how to get No 1 step done?

 Creating the mirrored root pool is easy enough at install time - just
 save the SSD for the guest virtual disks.  All of this is in the absence of the
 actual performance characteristics you expect, but that's a reasonable
 starting point.

 I hope that's useful...  Jeff

That is great, thanks Jeff


 --


 *Jeff Savit* | Principal Sales Consultant
 Phone: 602.824.6275 | Email: jeff.sa...@oracle.com | Blog:
 http://blogs.oracle.com/jsavit
 Oracle North America Commercial Hardware
 Operating Environments & Infrastructure S/W Pillar
 2355 E Camelback Rd | Phoenix, AZ 85016



