___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
, while writes to the
ZIL/SLOG will be more random (in order to commit quickly)?
Scott Meilicke
At some point you have to rely on your backups for the unexpected and
unforeseen. Make sure they are good!
Michael, nice reliability write up!
--
Scott Meilicke
, before it got corrupted by your
system, happens to be saved somewhere else before it reached your system.
Has it been running long? Initially the numbers are way off. After a while
it settles down into something reasonable.
How many disks, and what size, are in your raidz2?
-Scott
On 9/29/10 8:36 AM, LIC mesh licm...@gmail.com wrote:
Is there any way to stop a resilver?
We gotta stop this
in) and restarts.
Never gets past 0.00% completion, and K resilvered on any LUN.
64 LUNs, 32x5.44T, 32x10.88T in 8 vdevs.
On Wed, Sep 29, 2010 at 11:40 AM, Scott Meilicke
scott.meili...@craneaerospace.com wrote:
Has it been running long? Initially the numbers are way off. After
, Scott Meilicke
scott.meili...@craneaerospace.com wrote:
OK, let me see if I have this right:
8 shelves, 1T disks, 24 disks per shelf = 192 disks
8 shelves, 2T disks, 24 disks per shelf = 192 disks
Each raidz is six disks.
64 raidz vdevs
Each iSCSI target is made up of 8 of these raidz vdevs (8
This must be resilver day :)
I just had a drive failure. The hot spare kicked in, and access to the pool
over NFS was effectively zero for about 45 minutes. Currently the pool is still
resilvering, but for some reason I can access the file system now.
Resilver speed has been beaten to death I
I should add I have 477 snapshots across all files systems. Most of them are
hourly snaps (225 of them anyway).
On Sep 29, 2010, at 3:16 PM, Scott Meilicke wrote:
--
Scott Meilicke | Enterprise Systems Administrator | Crane Aerospace
Electronics | +1 425-743-8153 | M: +1 206-406-2670
I just realized that the email I sent to David and the list did not make the
list (at least as jive can see it), so here is what I sent on the 23rd:
Brilliant. I set those parameters via /etc/system, rebooted, and the pool
imported with just the -f switch. I had seen this as an option earlier,
I am running nexenta CE 3.0.3.
I have a file system that at some point in the last week went from a directory
per 'ls -l' to a special character device. This results in not being able to
get into the file system. Here is my file system, scott2, along with a new file
system I just created,
On 9/27/10 9:56 AM, Victor Latushkin victor.latush...@oracle.com wrote:
On Sep 27, 2010, at 8:30 PM, Scott Meilicke wrote:
When I do the calculations, assuming 300 bytes per block to be conservative,
with 128K blocks, I get 2.34G of cache (RAM, L2ARC) per Terabyte of deduped
data. But block size is dynamic, so you will need more than this.
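The arithmetic behind that estimate can be sketched as follows (the 300 bytes
per DDT entry and the uniform 128K block size are the assumptions stated above;
real pools with smaller blocks need proportionally more):

```shell
# Estimate dedupe table (DDT) memory per TiB of deduped data.
bytes_per_entry=300               # conservative DDT entry size, per the post
block_size=131072                 # 128K blocks
deduped_bytes=1099511627776       # 1 TiB of deduped data
entries=$((deduped_bytes / block_size))
ddt_bytes=$((entries * bytes_per_entry))
awk -v b="$ddt_bytes" 'BEGIN { printf "%.2f GiB of cache per TiB\n", b / (1024^3) }'
```

With those inputs it prints 2.34 GiB per TiB, matching the figure above; halve
the block size and the requirement doubles.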
Scott
--
This message posted from opensolaris.org
Can I disable dedup on the dataset while the transfer is going on?
Yes. Only the blocks copied after disabling dedupe will not be deduped. The
stuff you have already copied will be deduped.
Can I simply Ctrl-C the process to stop it?
Yes, you can do that to a mv process.
Maybe stop the
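Turning dedupe off mid-copy, as discussed above, is a single property change
(hypothetical dataset name):

```
zfs set dedup=off tank/data
```

Blocks written after this point are stored undeduped; the already-deduped
blocks stay in the dedupe table until they are rewritten or freed.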
Hi Peter,
dedupe is pool wide. File systems can opt in or out of dedupe. So if multiple
file systems are set to dedupe, then they all benefit from using the same pool
of deduped blocks. In this way, if two files share some of the same blocks,
even if they are in different file systems, they
Craig,
3. I do not think you will get much dedupe on video, music and photos. I would
not bother. If you really wanted to know at some later stage, you could create
a new file system, enable dedupe, and copy your data (or a subset) into it just
to see. In my experience there is a significant
I had already begun the process of migrating my 134 boxes over to Nexenta
before Oracle's cunning plans became known. This just reaffirms my decision.
Us too. :)
Are there other file systems underneath daten/backups that have snapshots?
Another data point - I used three 15K disks striped using my RAID controller as
a slog for the zil, and performance went down. I had three raidz sata vdevs
holding the data, and my load was VMs, i.e. a fair amount of small, random IO
(60% random, 50% write, ~16k in size).
Scott
If these files are deduped, and there is not a lot of RAM on the machine, it
can take a long, long time to work through the dedupe portion. I don't know
enough to know if that is what you are experiencing, but it could be the
problem.
How much RAM do you have?
Scott
Look again at how XenServer does storage. I think you will find it already has
a solution, both for iSCSI and NFS.
Reaching into the dusty regions of my brain, I seem to recall that since RAIDz
does not work like a traditional RAID 5, particularly because of variably sized
stripes, that the data may not hit all of the disks, but it will always be
redundant.
I apologize for not having a reference for this
Price? I cannot find it.
You might bring over all of your old data and snaps, then clone that into a new
volume. Bring your recent stuff into the clone. Since the clone only updates
blocks that are different than the underlying snap, you may see a significant
storage savings.
Two clones could even be made - one for
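The clone approach described above might look like this (hypothetical dataset
names; the clone shares blocks with its origin snapshot, so only changed data
consumes new space):

```
zfs snapshot tank/olddata@migrate
zfs clone tank/olddata@migrate tank/newvol
```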
iSCSI writes require a sync to disk for every write. SMB writes get cached in
memory, therefore are much faster.
I am not sure why it is so slow for reads.
Have you tried comstar iSCSI? I have read in these forums that it is faster.
-Scott
VMware will properly handle sharing a single iSCSI volume across multiple ESX
hosts. We have six ESX hosts sharing the same iSCSI volumes - no problems.
-Scott
At the time we had it set up as 3 x 5 disk raidz, plus a hot spare. These 16
disks were in a SAS cabinet, and the slog was on the server itself. We are
now running 2 x 7 raidz2 plus a hot spare and slog, all inside the cabinet.
Since the disks are 1.5T, I was concerned about resilver times
My use case for opensolaris is as a storage server for a VM environment (we
also use EqualLogic, and soon an EMC CX4-120). To that end, I use iometer
within a VM, simulating my VM IO activity, with some balance given to easy
benchmarking. We have about 110 VMs across eight ESX hosts. Here is
I have used build 124 in this capacity, although I did zero tuning. I had about
4T of data on a single 5T iSCSI volume over gigabit. The windows server was a
VM, and the opensolaris box is on a Dell 2950, 16G of RAM, x25e for the zil, no
l2arc cache device. I used comstar.
It was being used
You will get much better random IO with mirrors, and better reliability when a
disk fails with raidz2. Six sets of mirrors are fine for a pool. From what I
have read, a hot spare can be shared across pools. I think the correct term
would be load balanced mirrors, vs RAID 10.
What kind of
One of the reasons I am investigating solaris for
this is sparse volumes and dedupe could really help
here. Currently we use direct attached storage on
the dom0s and allocate an LVM to the domU on
creation. Just like your example above, we have lots
of those 80G to start with please
It is hard, as you note, to recommend a box without knowing the load. How many
linux boxes are you talking about?
I think having a lot of space for your L2ARC is a great idea.
Will you mirror your SLOG, or load balance them? I ask because perhaps one will
be enough, IO wise. My box has one
I was planning to mirror them - mainly in the hope that I could hot swap a new
one in the event that an existing one started to degrade. I suppose I could
start with one of each and convert to a mirror later although the prospect of
losing either disk fills me with dread.
You do not need to
Apple users have different expectations regarding data loss than Solaris and
Linux users do.
Come on, no Apple user bashing. Not true, not fair.
Scott
This is what I used:
http://wikis.sun.com/display/OpenSolarisInfo200906/How+to+Configure+iSCSI+Target+Ports
I distilled that to:
disable the old, enable the new (comstar)
* sudo svcadm disable iscsitgt
* sudo svcadm enable stmf
Then four steps (using my zfs/zpool info - substitute for yours):
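The four steps were cut off above; from memory, a typical COMSTAR sequence
looks like the following (hypothetical pool/zvol names and GUID, so verify
against the wiki page linked above):

```
sbdadm create-lu /dev/zvol/rdsk/data01/san/myvol    # 1. make the zvol a logical unit
stmfadm add-view 600144f0...                        # 2. expose the LU (GUID printed by create-lu)
itadm create-target                                 # 3. create an iSCSI target
svcadm enable -r svc:/network/iscsi/target:default  # 4. enable the target service
```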
Greg, I am using NetBackup 6.5.3.1 (7.x is out) with fine results. Nice and
fast.
-Scott
To be clear, you can do what you want with the following items (besides
your server):
(1) OpenSolaris LiveCD
(1) 8GB USB Flash drive
As many tapes as you need to store your data pools on.
Make sure the USB drive has a saved stream from your rpool. It should
also have a downloaded copy of
You might have to force the import with -f.
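That is, something like (hypothetical pool name):

```
zpool import -f tank
```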
Scott
I don't think adding an SSD mirror to an existing pool will do much for
performance. Some of your data will surely go to those SSDs, but I don't think
Solaris will know they are SSDs and move blocks in and out according to
usage patterns to give you an all around boost. They will just be
Thanks Dan.
When I try the clone then import:
pfexec zfs clone
data01/san/gallardo/g...@zfs-auto-snap:monthly-2009-12-01-00:00
data01/san/gallardo/g-testandlab
pfexec sbdadm import-lu /dev/zvol/rdsk/data01/san/gallardo/g-testandlab
The sbdadm import-lu gives me:
sbdadm: guid in use
which
Sure, but that will put me back into the original situation.
-Scott
That is likely it. I create the volume using 2009.06, then later upgraded to
124. I just now created a new zvol, connected it to my windows server,
formatted, and added some data. Then I snapped the zvol, cloned the snap, and
used 'pfexec sbdadm create-lu'. When presented to the windows server,
I plan on filing a support request with Sun, and will try to post back with any
results.
Scott
I have a single zfs volume, shared out using COMSTAR and connected to a Windows
VM. I am taking snapshots of the volume regularly. I now want to mount a
previous snapshot, but when I go through the process, Windows sees the new
volume, but thinks it is blank and wants to initialize it. Any
Link aggregation can use different algorithms to load balance. Using L4 (IP
plus originating port I think), using a single client computer and the same
protocol (NFS), but different origination ports has allowed me to saturate both
NICS in my LAG. So yes, you just need more than one
It looks like there is not a free slot for a hot spare? If that is the case,
then it is one more factor to push towards raidz2, as you will need time to
remove the failed disk and insert a new one. During that time you don't want to
be left unprotected.
I think Y is such a variable and complex number it would be difficult to give a
rule of thumb, other than to 'test with your workload'.
My server, having three, five disk raidzs (striped) and an intel x25-e as a zil
can fill my two G ethernet pipes over NFS (~200MBps) during mostly sequential
Yes, a coworker lost a second disk during a rebuild of a raid5 and lost all
data. I have not had a failure, however when migrating EqualLogic arrays in and
out of pools, I lost a disk on an array. No data loss, but it concerns me
because during the moves, you are essentially reading and writing
It does 'just work', however you may have some file and/or file system
corruption if the snapshot was taken at the moment that your mac is updating
some files. So use the time slider function and take a lot of snaps. :)
#1. It may help to use 15k disks as the zil. When I tested using three 15k
disks striped as my zil, it made my workload go slower, even though it seems
like it should have been faster. My suggestion is to test it out, and see if it
helps.
#3. You may get good performance with an inexpensive
If the 7310s can meet your performance expectations, they sound much better
than a pair of x4540s. Auto-fail over, SSD performance (although these can be
added to the 4540s), ease of management, and a great front end.
I haven't seen if you can use your backup software with the 7310s, but from
I second the use of zilstat - very useful, especially if you don't want to mess
around with adding a log device and then having to destroy the pool if you
don't want the log device any longer.
On Nov 18, 2009, at 2:20 AM, Dushyanth wrote:
Just to clarify : Does iSCSI traffic from a Solaris
I am sorry that I don't have any links, but here is what I observe on my
system. dd does not do sync writes, so the ZIL is not used. iSCSI traffic does
sync writes (as of 2009.06, but not 2008.05), so if you repeat your test using
an iSCSI target from your system, you should see log activity.
Excellent! That worked just fine. Thank you Victor.
-Scott
Hi all,
I received my SSD, and wanted to test it out using fake zpools with files as
backing stores before attaching it to my production pool. However, when I
exported the test pool and imported, I get an error. Here is what I did:
I created a file to use as a backing store for my new pool:
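The elided commands presumably resembled the following (hypothetical sizes and
paths; note that `zpool import` needs `-d` to search a directory for
file-backed vdevs, which is a common stumbling block with this kind of test):

```
mkfile 1g /var/tmp/backing-file
zpool create testpool /var/tmp/backing-file
zpool export testpool
zpool import -d /var/tmp testpool
```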
I don't think so. But, you can clone at the ZFS level, and then just use the
vmdk(s) that you need. As long as you don't muck about with the other stuff in
the clone, the space usage should be the same.
-Scott
Hi Jeremy,
I had a loosely similar problem with my 2009.06 box. In my case (which may not
be yours), working with support we found a bug that was causing my pool to
hang. I also got erroneous errors when I did a scrub ( 3 x 5 disk raidz). I am
using the same LSI controller. A sure fire way to
sigh
Thanks Frédéric, that is a very interesting read.
So my options as I see them now:
1. Keep the x25-e, and disable the cache. Performance should still be improved,
but not by a *whole* lot, right? I will google for an expectation, but if
anyone knows off the top of their head, I would
Ed, your comment:
If solaris is able to install at all, I would have to acknowledge, I
have to shutdown anytime I need to change the Perc configuration, including
replacing failed disks.
Replacing failed disks is easy when PERC is doing the RAID. Just remove the
failed drive and replace with a
I have an Intel X25-E 32G in the mail (actually the kingston version), and
wanted to get a sanity check before I start.
System:
Dell 2950
16G RAM
16 1.5T SATA disks in a SAS chassis hanging off of an LSI 3801e, no extra drive
slots, a single zpool.
svn_124, but with my zpool still running at
It is more cost, but a WAN Accelerator (Cisco WAAS, Riverbed, etc.) would be a
big help.
Scott
Requires a login...
How can I verify if the ZIL has been disabled or not?
I am trying to see how much benefit I might get by using an SSD as a ZIL. I
disabled the ZIL via the ZFS Evil Tuning Guide:
echo zil_disable/W0t1 | mdb -kw
and then rebooted. However, I do not see any benefits for my NFS workload.
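As the follow-up below notes, the flag can also go in /etc/system so it
survives reboots. A sketch of that fragment (keep in mind zil_disable applies
to every pool on the box, and later builds replaced it with the per-dataset
`sync` property):

```
* /etc/system -- disable the ZIL globally (testing only!)
set zfs:zil_disable = 1
```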
Thanks,
Thank you both, much appreciated.
I ended up having to put the flag into /etc/system. When I disabled the ZIL and
umount/mounted without a reboot, my ESX host would not see the NFS export, nor
could I create a new NFS connection from my ESX host. I could get into the file
system from the host
zfs share -a
Ah-ha! Thanks.
FYI, I got between 2.5x and 10x improvement in performance, depending on the
test. So tempting :)
-Scott
I think in theory the ZIL/L2ARC should make things nice and fast if your
workload includes sync requests (database, iscsi, nfs, etc.), regardless of the
backend disks. But the only sure way to know is test with your work load.
-Scott
True, this setup is not designed for high random I/O, but rather lots of
storage with fair performance. This box is for our dev/test backend storage.
Our production VI runs in the 500-700 IOPS (80+ VMs, production plus dev/test)
on average, so for our development VI, we are expecting half of
Roch Bourbonnais Wrote:
100% random writes produce around 200 IOPS with a 4-6 second pause
around every 10 seconds.
This indicates that the bandwidth you're able to transfer
through the protocol is about 50% greater than the bandwidth
the pool can offer to ZFS. Since this is not sustainable,
This sounds like the same behavior as opensolaris 2009.06. I had several disks
recently go UNAVAIL, and the spares did not take over. But as soon as I
physically removed a disk, the spare started replacing the removed disk. It
seems UNAVAIL is not the same as the disk not being there. I wish
So what happens during the txg commit?
For example, if the ZIL is a separate device, SSD for this example, does it not
work like:
1. A sync operation commits the data to the SSD
2. A txg commit happens, and the data from the SSD are written to the spinning
disk
So this is two writes, correct?
Doh! I knew that, but then forgot...
So, for the case of no separate device for the ZIL, the ZIL lives on the disk
pool. In which case, the data are written to the pool twice during a sync:
1. To the ZIL (on disk)
2. From RAM to disk during txg
If this is correct (and my history in this
So, I just re-read the thread, and you can forget my last post. I had thought
the argument was that the data were not being written to disk twice (assuming
no separate device for the ZIL), but it was just explaining to me that the data
are not read from the ZIL to disk, but rather from memory
Yes, I was getting confused. Thanks to you (and everyone else) for clarifying.
Sync or async, I see the txg flushing to disk starve read IO.
Scott
I only see the blocking while load testing, not during regular usage, so I am
not so worried. I will try the kernel settings to see if that helps if/when I
see the issue in production.
For what it is worth, here is the pattern I see when load testing NFS (iometer,
60% random, 65% read, 8k
You are completely off your rocker :)
No, just kidding. Assuming the virtual front-end servers are running on
different hosts, and you are doing some sort of raid, you should be fine.
Performance may be poor due to the inexpensive targets on the back end, but you
probably know that. A while
As I understand it, when you expand a pool, the data do not automatically
migrate to the other disks. You will have to rewrite the data somehow, usually
a backup/restore.
-Scott
Roman, are you saying you want to install OpenSolaris on your old servers, or
make the servers look like an external JBOD array, that another server will
then connect to?
You can try:
zpool iostat -v pool_name 1
This will show you IO on each vdev at one second intervals. Perhaps you will
see different IO behavior on any suspect drive.
-Scott
Yes! That would be icing on the cake.
My EqualLogic arrays do not disconnect when resizing volumes.
When I need to resize, on the Windows side I open the iSCSI control panel, and
get ready to click the 'logon' button. I then resize the volume on the
OpenSolaris box, and immediately after that is complete, on the Windows side,
Note - this has a mini PCIe interface, not PCIe.
I had the 64GB version in a Dell Mini 9. While it was great for its small
size, low power and low heat characteristics (no fan on the Mini 9!), it was
only faster than the striped sata drives in my mac pro when it came to random
reads.
ZFS absolutely observes synchronous write requests (e.g. by NFS or a
database). The synchronous write requests do not benefit from the
long write aggregation delay so the result may not be written as
ideally as ordinary write requests. Recently zfs has added support
for using a SSD as a
You can use a separate SSD ZIL.
which gap?
'RAID-Z should mind the gap on writes' ?
Message was edited by: thometal
I believe this is in reference to the raid 5 write hole, described here:
http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5_performance
RAIDZ should avoid this via its copy-on-write model:
For what it is worth, I too have seen this behavior when load testing our zfs
box. I used iometer and the RealLife profile (1 worker, 1 target, 65% reads,
60% random, 8k, 32 IOs in the queue). When writes are being dumped, reads drop
close to zero, from 600-700 read IOPS to 15-30 read IOPS.
On Tue, 30 Jun 2009, Bob Friesenhahn wrote:
Note that this issue does not apply at all to NFS
service, database
service, or any other usage which does synchronous
writes.
I see read starvation with NFS. I was using iometer on a Windows VM, connecting
to an NFS mount on a 2008.11 physical
I ran the RealLife iometer profile on NFS based storage (vs. SW iSCSI), and got
nearly identical results to having the disks on iSCSI:
iSCSI
IOPS: 1003.8
MB/s: 7.8
Avg Latency (s): 27.9
NFS
IOPS: 1005.9
MB/s: 7.9
Avg Latency (s): 29.7
Interesting!
Here is how the pool was behaving during the
Hi,
When you have a lot of random read/writes, raidz/raidz2 can be fairly slow.
http://blogs.sun.com/roch/entry/when_to_and_not_to
The recommendation is to break the disks into smaller raidz/z2 stripes, thereby
improving IO.
From the ZFS Best Practices Guide:
if those servers are on physical boxes right now i'd do some perfmon
caps and add up the iops.
Using perfmon to get a sense of what is required is a good idea. Use the 95
percentile to be conservative. The counters I have used are in the Physical
disk object. Don't ignore the latency counters
Isn't that section of the evil tuning guide you're quoting actually about
checking if the NVRAM/driver connection is working right or not?
Miles, yes, you are correct. I just thought it was interesting reading about
how syncs and such work within ZFS.
Regarding my NFS test, you remind me that
See this thread for information on load testing for vmware:
http://communities.vmware.com/thread/73745?tstart=0&start=0
Within the thread there are instructions for using iometer to load test your
storage. You should test out your solution before going live, and compare what
you get with what
For ~100 people, I like Bob's answer. RAID 10 will get you lots of speed.
Perhaps RAID50 would be just fine for you as well and give you more space, but
without measuring, you won't be sure. Don't forget a hot spare (or two)!
Your MySQL database - will that generate a lot of IO?
Also, to
So how are folks getting around the NFS speed hit? Using SSD or battery backed
RAM ZILs?
Regarding limited NFS mounts, underneath a single NFS mount, would it work to:
* Create a new VM
* Remove the VM from inventory
* Create a new ZFS file system underneath the original
* Copy the VM to that
Generally, yes. Test it with your workload and see how it works out for you.
-Scott
Both iSCSI and NFS are slow? I would expect NFS to be slow, but in my iSCSI
testing with OpenSolaris 2008.11, performance was reasonable, about 2x NFS.
Setup: Dell 2950 with a SAS HBA and SATA 3x5 raidz (15 disks, no separate ZIL),
iSCSI using vmware ESXi 3.5 software initiator.
Scott
The SATA drive will be your bottleneck, and you will lose any speed advantages
of the SAS drives, especially using 3 vdevs on a single SATA disk.
I am with Richard, figure out what performance you need, and build accordingly.