comment below…
On Apr 18, 2013, at 5:17 AM, Edward Ned Harvey (openindiana)
openindi...@nedharvey.com wrote:
From: Timothy Coalson [mailto:tsc...@mst.edu]
Did you also compare the probability of bit errors causing data loss
without a complete pool failure? 2-way mirrors, when one device
From: Jay Heyl [mailto:j...@frelled.us]
Ah, that makes much more sense. Thanks for the clarification. Now that you
put it that way I have to wonder how I ever came under the impression it
was any other way.
I've gotten lost in the numerous miscommunications in this thread, but just to
From: Timothy Coalson [mailto:tsc...@mst.edu]
Did you also compare the probability of bit errors causing data loss
without a complete pool failure? 2-way mirrors, when one device
completely
dies, have no redundancy on that data, and the copy that remains must be
perfect or some data will
From: Jay Heyl [mailto:j...@frelled.us]
I now realize you're talking about 8 separate 2-disk
mirrors organized into a pool. mirror x1 y1 mirror x2 y2 mirror x3 y3...
Yup. That's normal, and the only way.
I also realize that almost every discussion I've seen online concerning
mirrors
On 04/17/2013 02:08 AM, Edward Ned Harvey (openindiana) wrote:
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
If you are IOPS constrained, then yes, raid-zn will be slower, simply
because any read needs to hit all data drives in the stripe.
Saso, I would expect you to know the answer
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
Raid-Z indeed does stripe data across all
leaf vdevs (minus parity) and does so by splitting the logical block up
into equally sized portions.
Jay, there you have it. You asked why use mirrors, and you said you would use
raidz2 or
On Tue, Apr 16, 2013 at 5:49 PM, Jim Klimov jimkli...@cos.ru wrote:
On 2013-04-17 02:10, Jay Heyl wrote:
Not to get into bickering about semantics, but I asked, "Or am I wrong
about reads being issued in parallel to all the mirrors in the array?", to
which you replied, "Yes, in normal case..."
On 2013-04-17 20:09, Jay Heyl wrote:
reply. Unless the first device to answer returns garbage (something
that doesn't match the expected checksum), other copies are not read
as part of this request.
Ah, that makes much more sense. Thanks for the clarification. Now that you
put it that way I
On Wed, Apr 17, 2013 at 7:38 AM, Edward Ned Harvey (openindiana)
openindi...@nedharvey.com wrote:
You also said the raidz2 will offer more protection against failure,
because you can survive any two disk failures (but no more.) I would argue
this is incorrect (I've done the probability
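The probability argument Edward alludes to can be sketched with simple combinatorics. This is only an illustrative model, assuming independent whole-disk failures with probability p and ignoring unrecoverable read errors during resilver; the function names and the value of p are made up for the example, not taken from the thread.

```python
from math import comb

def raidz2_loss_prob(n_disks, p):
    """Probability an n_disk raidz2 vdev loses data from whole-disk
    failures alone: three or more disks failing (bit errors ignored)."""
    return sum(comb(n_disks, k) * p**k * (1 - p)**(n_disks - k)
               for k in range(3, n_disks + 1))

def mirror_pool_loss_prob(n_pairs, p):
    """Probability a pool of n_pairs 2-way mirrors loses data:
    at least one pair losing both of its disks."""
    return 1 - (1 - p**2)**n_pairs

p = 0.01  # assumed independent per-disk failure probability (illustrative)
print(raidz2_loss_prob(8, p))       # one 8-disk raidz2 vdev
print(mirror_pool_loss_prob(4, p))  # four 2-way mirrors, also 8 disks
```

Under this toy model the raidz2 vdev loses data less often than the same eight disks arranged as 2-way mirrors, which is the whole-disk-failure side of the comparison; the thread's disagreement is about what happens once bit errors during resilver are added in.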
On Wed, Apr 17, 2013 at 11:21 AM, Jim Klimov jimkli...@cos.ru wrote:
On 2013-04-17 20:09, Jay Heyl wrote:
reply. Unless the first device to answer returns garbage (something
that doesn't match the expected checksum), other copies are not read
as part of this request.
Ah, that makes much
On Wed, Apr 17, 2013 at 12:57 PM, Timothy Coalson tsc...@mst.edu wrote:
On Wed, Apr 17, 2013 at 7:38 AM, Edward Ned Harvey (openindiana)
openindi...@nedharvey.com wrote:
You also said the raidz2 will offer more protection against failure,
because you can survive any two disk failures (but no
On Wed, Apr 17, 2013 at 5:38 AM, Edward Ned Harvey (openindiana)
openindi...@nedharvey.com wrote:
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
Raid-Z indeed does stripe data across all
leaf vdevs (minus parity) and does so by splitting the logical block up
into equally sized
On 2013-04-17 21:25, Jay Heyl wrote:
It (finally) occurs to me that not all mirrors are created equal. I've been
assuming, and probably ignoring hints to the contrary, that what was being
compared here was a raid-z2 configuration with a 2-way mirror composed of
two 8-disk vdevs. I now realize
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
It would be difficult to believe that 10Gbit Ethernet offers better
bandwidth than 56Gbit Infiniband (the current offering). The switching
model is quite similar. The main reason why IB offers better latency
is a better HBA
From: Edward Ned Harvey (openindiana) openindi...@nedharvey.com
To: Discussion list for OpenIndiana openindiana-discuss@openindiana.org
Sent: Tue, 16 Apr 2013 10:49 AM
Subject: Re: [OpenIndiana-discuss] Recommendations for fast storage
(OpenIndiana-discuss Digest, Vol 33, Issue 20)
From: Bob
I am not an expert on this subject, but based on my reading of some
e-mails on different mailing lists and some relevant pages on
Wikipedia about SSD drives, the following points are mentioned as SSD
disadvantages (even for Enterprise-labeled drives):
SSD units are very
On Mon, Apr 15, 2013 at 5:00 AM, Edward Ned Harvey (openindiana)
openindi...@nedharvey.com wrote:
So I'm just assuming you're going to build a pool out of SSD's, mirrored,
perhaps even 3-way mirrors. No cache/log devices. All the ram you can fit
into the system.
What would be the logic
On 2013-04-16 19:17, Mehmet Erol Sanliturk wrote:
I am not an expert on this subject, but based on my reading of some
e-mails on different mailing lists and some relevant pages on
Wikipedia about SSD drives, the following points are mentioned as SSD
disadvantages (even for
Mehmet Erol Sanliturk wrote:
I am not an expert on this subject, but based on my reading of some
e-mails on different mailing lists and some relevant pages on
Wikipedia about SSD drives, the following points are mentioned as SSD
disadvantages (even for Enterprise-labeled
On Tue, Apr 16, 2013 at 11:54 AM, Jim Klimov jimkli...@cos.ru wrote:
On 2013-04-16 20:30, Jay Heyl wrote:
What would be the logic behind mirrored SSD arrays? With spinning platters
the mirrors improve performance by allowing the fastest of the mirrors to
respond to a particular command to be
On Tue, 16 Apr 2013, Jay Heyl wrote:
It's actually not all that difficult to saturate a 6Gb/s pathway with ZFS
when there are multiple storage devices on the other end of that path. No
single HDD today is going to come close to needing that full 6Gb/s, but put
four or five of them hanging off
On 04/16/2013 10:57 PM, Bob Friesenhahn wrote:
On Tue, 16 Apr 2013, Jay Heyl wrote:
It's actually not all that difficult to saturate a 6Gb/s pathway with ZFS
when there are multiple storage devices on the other end of that path. No
single HDD today is going to come close to needing that full
On Tue, Apr 16, 2013 at 3:48 PM, Jay Heyl j...@frelled.us wrote:
My question about the rationale behind the suggestion of mirrored SSD
arrays was really meant to be more in relation to the question from the OP.
I don't see how mirrored arrays of SSDs would be effective in his
situation.
On 04/16/2013 11:25 PM, Timothy Coalson wrote:
On Tue, Apr 16, 2013 at 3:48 PM, Jay Heyl j...@frelled.us wrote:
My question about the rationale behind the suggestion of mirrored SSD
arrays was really meant to be more in relation to the question from the OP.
I don't see how mirrored arrays of
On Tue, Apr 16, 2013 at 4:29 PM, Sašo Kiselkov skiselkov...@gmail.com wrote:
If you are IOPS constrained, then yes, raid-zn will be slower, simply
because any read needs to hit all data drives in the stripe. This is
even worse on writes if the raidz has bad geometry (number of data
drives
On 04/16/2013 11:37 PM, Timothy Coalson wrote:
On Tue, Apr 16, 2013 at 4:29 PM, Sašo Kiselkov skiselkov...@gmail.com wrote:
If you are IOPS constrained, then yes, raid-zn will be slower, simply
because any read needs to hit all data drives in the stripe. This is
even worse on writes if the
On Tue, Apr 16, 2013 at 2:25 PM, Timothy Coalson tsc...@mst.edu wrote:
On Tue, Apr 16, 2013 at 3:48 PM, Jay Heyl j...@frelled.us wrote:
My question about the rationale behind the suggestion of mirrored SSD
arrays was really meant to be more in relation to the question from the
OP.
I
On Tue, Apr 16, 2013 at 4:44 PM, Sašo Kiselkov skiselkov...@gmail.com wrote:
On 04/16/2013 11:37 PM, Timothy Coalson wrote:
On Tue, Apr 16, 2013 at 4:29 PM, Sašo Kiselkov skiselkov...@gmail.com
wrote:
If you are IOPS constrained, then yes, raid-zn will be slower, simply
because any read
ZFS data blocks are also a power of two, which means that with
1, 2, 4, 8, 16, 32, ... data disks, every write is spread evenly over all disks.
If you add one disk, e.g. going from 8 to 9 data disks, one disk is not used on a
given read/write.
Does that mean 9 data disks are slower than 8 disks?
No, 9
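The block-splitting arithmetic discussed above can be sketched as rough math. This is a simplification: real raidz also interleaves parity sectors and pads allocations, so the function name and the numbers are illustrative only, not ZFS internals.

```python
def raidz_bytes_per_disk(block_size, n_data_disks, sector=4096):
    """Sketch: a logical block is split into equal portions across the
    data disks, each rounded up to whole sectors (parity and padding
    are ignored here)."""
    total_sectors = -(-block_size // sector)      # ceil division
    per_disk = -(-total_sectors // n_data_disks)  # sectors per data disk
    return per_disk * sector

# A 128 KiB block is 32 sectors: that is 4 sectors (16 KiB) per disk
# with 8 data disks, and still 4 sectors per busy disk with 9, since
# 32 does not divide evenly by 9.
print(raidz_bytes_per_disk(128 * 1024, 8))
print(raidz_bytes_per_disk(128 * 1024, 9))
```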
clarification below...
On Apr 16, 2013, at 2:44 PM, Sašo Kiselkov skiselkov...@gmail.com wrote:
On 04/16/2013 11:37 PM, Timothy Coalson wrote:
On Tue, Apr 16, 2013 at 4:29 PM, Sašo Kiselkov skiselkov...@gmail.com wrote:
If you are IOPS constrained, then yes, raid-zn will be slower, simply
On Tue, 16 Apr 2013, Sašo Kiselkov wrote:
SATA and SAS are dedicated point-to-point interfaces so there is no
additive bottleneck with more drives as long as the devices are directly
connected.
Not true. Modern flash storage is quite capable of saturating a 6 Gbps
SATA link. SAS has an
On 2013-04-16 23:56, Jay Heyl wrote:
result in more devices being hit for both read and write. Or am I wrong
about reads being issued in parallel to all the mirrors in the array?
Yes, in normal case (not scrubbing which makes a point of reading
everything) this assumption is wrong. Writes do
On Tue, Apr 16, 2013 at 6:01 PM, Jim Klimov jimkli...@cos.ru wrote:
On 2013-04-16 23:56, Jay Heyl wrote:
result in more devices being hit for both read and write. Or am I wrong
about reads being issued in parallel to all the mirrors in the array?
Yes, in normal case (not scrubbing which
On 2013-04-16 23:37, Timothy Coalson wrote:
On Tue, Apr 16, 2013 at 4:29 PM, Sašo Kiselkov skiselkov...@gmail.com wrote:
If you are IOPS constrained, then yes, raid-zn will be slower, simply
because any read needs to hit all data drives in the stripe. This is
even worse on writes if the raidz
On 2013-04-17 01:12, Timothy Coalson wrote:
On Tue, Apr 16, 2013 at 6:01 PM, Jim Klimov jimkli...@cos.ru wrote:
On 2013-04-16 23:56, Jay Heyl wrote:
result in more devices being hit for both read and write. Or am I wrong
about reads being issued in parallel to all the mirrors in the array?
On 04/17/2013 12:08 AM, Richard Elling wrote:
clarification below...
On Apr 16, 2013, at 2:44 PM, Sašo Kiselkov skiselkov...@gmail.com wrote:
On 04/16/2013 11:37 PM, Timothy Coalson wrote:
On Tue, Apr 16, 2013 at 4:29 PM, Sašo Kiselkov
skiselkov...@gmail.com wrote:
If you are IOPS
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
If you are IOPS constrained, then yes, raid-zn will be slower, simply
because any read needs to hit all data drives in the stripe.
Saso, I would expect you to know the answer to this question, probably:
I have heard that raidz is more
On Tue, Apr 16, 2013 at 4:01 PM, Jim Klimov jimkli...@cos.ru wrote:
On 2013-04-16 23:56, Jay Heyl wrote:
result in more devices being hit for both read and write. Or am I wrong
about reads being issued in parallel to all the mirrors in the array?
Yes, in normal case (not scrubbing which
From: Jay Heyl [mailto:j...@frelled.us]
So I'm just assuming you're going to build a pool out of SSD's, mirrored,
perhaps even 3-way mirrors. No cache/log devices. All the ram you can fit
into the system.
What would be the logic behind mirrored SSD arrays? With spinning platters
the
For the context of ZPL, easy answer below :-) ...
On Apr 16, 2013, at 4:12 PM, Timothy Coalson tsc...@mst.edu wrote:
On Tue, Apr 16, 2013 at 6:01 PM, Jim Klimov jimkli...@cos.ru wrote:
On 2013-04-16 23:56, Jay Heyl wrote:
result in more devices being hit for both read and write. Or am I
From: Mehmet Erol Sanliturk [mailto:m.e.sanlit...@gmail.com]
SSD units are very vulnerable to power cuts during operation, with results ranging
from complete failure (after which they cannot be used any more) to complete loss of data.
If there are any junky drives out there that fail so dramatically, those are
junky and
On 2013-04-17 02:10, Jay Heyl wrote:
Not to get into bickering about semantics, but I asked, "Or am I wrong
about reads being issued in parallel to all the mirrors in the array?", to
which you replied, "Yes, in normal case... this assumption is wrong... but
reads should be in parallel." (Ellipses
On 17/04/13 02:10, Jay Heyl wrote:
Not to get into bickering about semantics, but I asked, "Or am I
wrong about reads being issued in parallel to all the mirrors in
the array?"
Each read is issued only to a (let's say, random) disk in the mirror,
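The read policy described in the thread (one copy read per request, with the other copies consulted only on a checksum mismatch) can be sketched as follows. The function and the crc32 checksum are hypothetical illustrations; ZFS uses its own checksums and device-selection logic.

```python
import random
import zlib

def mirror_read(copies, expected_cksum):
    """Sketch of the policy described above: try one randomly chosen
    copy first; fall back to the remaining copies only if the checksum
    does not match. (Illustrative only, not ZFS internals.)"""
    for i in random.sample(range(len(copies)), len(copies)):
        if zlib.crc32(copies[i]) == expected_cksum:
            return copies[i]  # good copy found: no other disk is read
    raise IOError("all mirror copies failed the checksum")

good = b"block contents"
data = mirror_read([good, b"bit-rotted copy"], zlib.crc32(good))
print(data == good)
```

In the common case only one device services the read, which is why a pool of mirrors spreads concurrent reads across devices rather than hitting every copy in parallel.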
From: Wim van den Berge [mailto:w...@vandenberge.us]
multiple 10Gb uplinks
However the next system is going to be a little different. It needs to be
the absolute fastest iSCSI target we can create/afford.
So I'm just assuming you're going to build a pool out of SSD's, mirrored,
perhaps
From: Günther Alka a...@hfg-gmuend.de
I would think about the following
- yes, I would build that from SSD
- build the pool from multiple 10 disk Raid-Z2 vdevs,
Slightly off topic, but what is the status of the TRIM command and zfs...?
JD
On 04/15/2013 03:30 PM, John Doe wrote:
From: Günther Alka a...@hfg-gmuend.de
I would think about the following
- yes, I would build that from SSD
- build the pool from multiple 10 disk Raid-Z2 vdevs,
Slightly off topic, but what is the status of the TRIM command and zfs...?
ATM:
On Mon, 15 Apr 2013, Ong Yu-Phing wrote:
Working set of ~50% is quite large; when you say data analysis I'd assume
some sort of OLTP or real-time BI situation, but do you know the nature of
your processing, i.e. is it latency dependent or bandwidth dependent? Reason
I ask, is because I think
Hello,
We have been running OpenIndiana (and its various predecessors) as storage
servers in production for the last couple of years. Over that time the
majority of our storage infrastructure has been moved to Open Indiana to the
point where we currently serve (iSCSI, NFS and CIFS) about 1.2PB
I would think about the following
- yes, I would build that from SSD
- build the pool from multiple 10 disk Raid-Z2 vdevs,
- use as much RAM as possible to serve most reads from RAM,
  for example a dual-socket 2011 system with 256 GB RAM
- if you need sync writes/ disabled LU write back cache,
On 04/14/2013 05:15 PM, Wim van den Berge wrote:
Hello,
We have been running OpenIndiana (and its various predecessors) as storage
servers in production for the last couple of years. Over that time the
majority of our storage infrastructure has been moved to Open Indiana to the
point where
On Apr 14, 2013, at 8:15 AM, Wim van den Berge w...@vandenberge.us wrote:
Hello,
We have been running OpenIndiana (and its various predecessors) as storage
servers in production for the last couple of years. Over that time the
majority of our storage infrastructure has been moved to Open
A heads up that 10-12TB used means you'd need 11.5-13TB usable, assuming
you'd need to keep used storage under 90% of total usable storage (or is
that old news now?).
So, using Saso's RAID5 config of Intel DC3700s in 3xdisk raidz1, that
means you'd need 21x Intel DC3700's at 800GB
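The sizing above can be checked with quick arithmetic, assuming 800 GB drives in 3-disk raidz1 groups (two data disks per group) and ignoring ZFS metadata overhead; the function name is made up for the example.

```python
def usable_tb(n_drives, drive_gb=800, group_size=3, parity=1):
    """Raw usable capacity of n_drives arranged as raidz groups of
    group_size drives with `parity` parity drives each (in TB,
    before ZFS overhead)."""
    groups = n_drives // group_size
    return groups * (group_size - parity) * drive_gb / 1000.0

cap = usable_tb(21)   # 21 drives -> 7 x raidz1(3): 11.2 TB usable
print(cap)
print(cap * 0.9)      # roughly 10 TB used at the 90% fill guideline
```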