On Mar 25, 2012, at 6:26 AM, Jeff Bacon wrote:
>> In general, mixing SATA and SAS directly behind expanders (eg without
>> SAS/SATA intereposers) seems to be bad juju that an OS can't fix.
>
> In general I'd agree. Just mixing them on the same box can be problematic,
I've noticed - though I think ...
> In general, mixing SATA and SAS directly behind expanders (eg without
> SAS/SATA intereposers) seems to be bad juju that an OS can't fix.
In general I'd agree. Just mixing them on the same box can be problematic,
I've noticed - though I think as much as anything that the firmware
on the 3G/s expanders ...
Thanks for sharing, Jeff!
Comments below...
On Mar 24, 2012, at 4:33 PM, Jeff Bacon wrote:
>> 2012-03-21 16:41, Paul Kraus wrote:
>>> I have been running ZFS in a mission critical application since
>>> zpool version 10 and have not seen any issues with some of the vdevs
>>> in a zpool full while others are virtually empty ...
> 2012-03-21 16:41, Paul Kraus wrote:
> > I have been running ZFS in a mission critical application since
> > zpool version 10 and have not seen any issues with some of the vdevs
> > in a zpool full while others are virtually empty. We have been running
> > commercial Solaris 10 releases. The configuration was ...
On Thu, 22 Mar 2012, Jim Klimov wrote:
I think that a certain Bob F. would disagree, especially
when larger native sectors and ashift=12 come into play.
Namely, one scenario where this is important is automated
storage of thumbnails for websites, or some similar small
objects in vast amounts.
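A minimal sketch of why this matters, using a simplified raidz allocation model (the parity-per-stripe-row and round-up-to-a-multiple-of-parity+1 accounting as commonly described for ZFS raidz; the function name and numbers are illustrative, not from the thread):

    import math

    def raidz_alloc_sectors(data_sectors, ndisks, parity):
        # Parity is added per stripe row, then the allocation is padded
        # up to a multiple of (parity + 1) sectors -- the simplified
        # raidz accounting rule, as commonly described.
        rows = math.ceil(data_sectors / (ndisks - parity))
        total = data_sectors + rows * parity
        return total + (-total % (parity + 1))

    # A 1-sector object (e.g. a <=4 KiB thumbnail with ashift=12) on a
    # 6-disk raidz2 occupies 3 sectors: 200% overhead, like a 3-way mirror.
    print(raidz_alloc_sectors(1, 6, 2))    # -> 3
    # A 32-sector (128 KiB) block on the same vdev: the expected ~50% overhead.
    print(raidz_alloc_sectors(32, 6, 2))   # -> 48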
2012-03-22 20:52, Richard Elling wrote:
> Yes, but it is a rare case for 512b sectors.
> It could be more common for 4KB sector disks when ashift=12.
...
Were there any studies or tests regarding storage of many small
files (1-sector sized, or close to that) on different vdev layouts?
It is not ...
On Mar 22, 2012, at 3:03 AM, Jim Klimov wrote:
> 2012-03-21 22:53, Richard Elling wrote:
> ...
>>> This is why a single vdev's random-read performance is equivalent to
>>> the random-read performance of a single drive.
>>
>> It is not as bad as that. The actual worst case number for a HDD ...
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of maillist reader
>
> I read, though, that ZFS does not have a "defragmentation" tool; is this
> still the case?
True.
> It would seem with such a performance difference between
> "sequential reads" and "random reads" for raidzN's, a defragmentation
> tool would be very high on ZFS's TODO list ...
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Paul Kraus
>
> There are two different cases here... a resilver to reconstruct
> data from a failed drive, and a scrub to proactively find bad sectors.
>
> The best-case situation for ...
2012-03-21 22:53, Richard Elling wrote:
...
> This is why a single vdev's random-read performance is equivalent to the
> random-read performance of a single drive.
It is not as bad as that. The actual worst case number for a HDD with
zfs_vdev_max_pending of one is:
    average IOPS * ((D+P) / D)
where, ...
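To make the quoted formula concrete, a minimal Python sketch (assuming, since the quote is cut off, that D = data disks and P = parity disks in the raidz vdev; the function name is illustrative):

    def raidz_worst_case_iops(avg_disk_iops, d, p):
        # Worst-case random-read IOPS per the formula quoted above.
        # Assumption (the quote is truncated): D = data disks, P = parity disks.
        return avg_disk_iops * (d + p) / d

    # A 14-disk raidz2 (D=12, P=2) of ~100-IOPS drives comes out slightly
    # *above* a single drive's IOPS, not below it:
    print(raidz_worst_case_iops(100, 12, 2))   # -> ~116.7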
On Wed, 21 Mar 2012, maillist reader wrote:
I read, though, that ZFS does not have a "defragmentation" tool; is this still
the case? It would seem with such a performance difference between
"sequential reads" and "random reads" for raidzN's, a defragmentation tool
would be very high on ZFS's TODO list ...
Thank you all for the information; I believe it is much clearer to me.
"Sequential Reads" should scale with the number of disks in the entire
zpool (regardless of the number of vdevs), and "Random Reads" will scale with
just the number of vdevs (i.e., the idea I had before only applies to "Random
Reads"), wh...
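That summary lends itself to a quick back-of-the-envelope check (hypothetical per-disk numbers, assuming ~100 MB/s sequential and ~100 random IOPS per drive; not from the thread):

    def pool_estimates(vdevs, disks_per_vdev, parity,
                       disk_mbps=100, disk_iops=100):
        # Sequential bandwidth ~ total data disks in the pool;
        # small random reads ~ number of top-level vdevs.
        data_disks = vdevs * (disks_per_vdev - parity)
        return data_disks * disk_mbps, vdevs * disk_iops

    # The same 20 disks, two ways:
    print(pool_estimates(2, 10, 2))  # 2x10 raidz2 -> (1600 MB/s, 200 IOPS)
    print(pool_estimates(4, 5, 1))   # 4x5  raidz1 -> (1600 MB/s, 400 IOPS)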
comments below...
On Mar 21, 2012, at 10:40 AM, Marion Hakanson wrote:
> p...@kraus-haus.org said:
>> Without knowing the I/O pattern, saying 500 MB/sec. is meaningless.
>> Achieving 500MB/sec. with 8KB files and lots of random accesses is really
>> hard, even with 20 HDDs. Achieving 500MB/sec. of sequential streaming of
>> 100MB+ files is much easier ...
2012-03-21 21:40, Marion Hakanson wrote:
Small, random read performance does not scale with the number of drives in each
raidz[123] vdev because of the dynamic striping. In order to read a single
logical block, ZFS has to read all the segments of that logical block, which
have been spread out across ...
p...@kraus-haus.org said:
> Without knowing the I/O pattern, saying 500 MB/sec. is meaningless.
> Achieving 500MB/sec. with 8KB files and lots of random accesses is really
> hard, even with 20 HDDs. Achieving 500MB/sec. of sequential streaming of
> 100MB+ files is much easier.
> . . .
> For ...
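The arithmetic behind that claim, as a sketch (assuming ~100 random IOPS per HDD, a common rule of thumb; not a figure from the thread):

    io_size = 8 * 1024          # 8 KiB files
    target = 500 * 1024**2      # 500 MB/s of random reads
    iops_needed = target // io_size
    print(iops_needed)          # -> 64000 IOPS

    # At ~100 random IOPS per HDD, that is on the order of 640 drives --
    # versus roughly 5 drives streaming 100 MB/s each for the same
    # 500 MB/s of sequential reads.
    print(iops_needed / 100)    # -> 640.0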
On Wed, Mar 21, 2012 at 9:51 AM, Jim Klimov wrote:
> 2012-03-21 17:28, Edward Ned Harvey wrote:
>> It's not advisable to put more than ~8 disks in a single vdev, because it
>> really hurts during resilver time. Maybe a week or two to resilver like
>> that.
>
> Yes, that's important to note also.
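A rough way to see where "a week or two" can come from (an illustrative model, not a ZFS measurement: resilver time ≈ data to reconstruct divided by the effective rebuild rate, and seek-bound rebuilds on wide, fragmented raidz vdevs can run far below the disks' sequential speed):

    def resilver_days(used_tb, effective_mbps):
        # Data to reconstruct divided by the effective rebuild rate.
        return used_tb * 1e12 / (effective_mbps * 1e6) / 86400

    print(round(resilver_days(3, 100), 1))  # mostly sequential rebuild: ~0.3 days
    print(round(resilver_days(3, 5), 1))    # seek-bound wide raidz:     ~6.9 days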
2012-03-21 17:28, Edward Ned Harvey wrote:
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of MLR
> ...
> Am I correct in thinking this means, for example, I have a single
> 14 disk raidz2 vdev zpool,
It's not advisable to put more than ~8 disks ...
On Tue, Mar 20, 2012 at 11:16 PM, MLR wrote:
> 1. Cache device for L2ARC
> Say we get a decent ssd, ~500MB/s read/write. If we have a 20 HDD zpool
> setup shouldn't we be reading at least at the 500MB/s read/write range? Why
> would we want a ~500MB/s cache?
Without knowing the I/O pattern ...
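A sketch of the usual answer to this question (hypothetical device numbers): for small random reads the pool delivers on the order of its vdev count times one disk's IOPS, while an SSD delivers tens of thousands of IOPS, so the cache wins by orders of magnitude even though its sequential bandwidth merely matches the pool's:

    hdd_iops, ssd_iops = 100, 50_000   # hypothetical per-device numbers
    vdevs = 2                          # e.g. 20 disks as 2x10 raidz2
    io = 8 * 1024                      # 8 KiB random reads

    print(vdevs * hdd_iops * io / 1024**2)  # pool:  ~1.6 MB/s of random reads
    print(ssd_iops * io / 1024**2)          # L2ARC: ~390 MB/s for cache hits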
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of MLR
>
> c. 2vdev of 4 1.5TB disks (raidz). 3vdev of 4 3TB disks (raidz)? (~500MB/s
> reading, maximize vdevs for performance)
If possible, spread your vdevs across 4 different controllers ...
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of MLR
>
> Say we get a decent ssd, ~500MB/s read/write. If we have a 20 HDD zpool
> setup shouldn't we be reading at least at the 500MB/s read/write range? Why
> would we want a ~500MB/s cache?
2012-03-21 16:41, Paul Kraus wrote:
I have been running ZFS in a mission critical application since
zpool version 10 and have not seen any issues with some of the vdevs
in a zpool full while others are virtually empty. We have been running
commercial Solaris 10 releases. The configuration was ...
On Wed, Mar 21, 2012 at 7:56 AM, Jim Klimov wrote:
> 2012-03-21 7:16, MLR wrote:
> One thing to note is that many people would not recommend using
> a "disbalanced" ZFS array - one expanded by adding a TLVDEV after
> many writes, or one consisting of differently-sized TLVDEVs.
>
> ZFS does a rather ...
2012-03-21 7:16, MLR wrote:
I read the "ZFS_Best_Practices_Guide" and "ZFS_Evil_Tuning_Guide", and have some
questions:
1. Cache device for L2ARC
Say we get a decent ssd, ~500MB/s read/write. If we have a 20 HDD zpool
setup shouldn't we be reading at least at the 500MB/s read/write range? ...
I read the "ZFS_Best_Practices_Guide" and "ZFS_Evil_Tuning_Guide", and have
some questions:
1. Cache device for L2ARC
Say we get a decent ssd, ~500MB/s read/write. If we have a 20 HDD zpool
setup shouldn't we be reading at least at the 500MB/s read/write range? Why
would we want a ~500MB/s cache?