On 13 February 2016 at 08:50, Tinker <ti...@openmailbox.org> wrote:
> Hi,
>
> 1)
> http://www.openbsd.org/papers/asiabsdcon2010_softraid/softraid.pdf page 3
> "2.2 RAID 1" says that it reads "on a round-robin basis from all active
> chunks", i.e. read operations are spread evenly across disks.

Yes, that's still the case today:

http://bxr.su/o/sys/dev/softraid_raid1.c#sr_raid1_rw

345            rt = 0;
346 ragain:
347            /* interleave reads */
348            chunk = sd->mds.mdd_raid1.sr1_counter++ %
349                sd->sd_meta->ssdi.ssd_chunk_no;
350            scp = sd->sd_vol.sv_chunks[chunk];
351            switch (scp->src_meta.scm_status) {

356            case BIOC_SDOFFLINE:

359                if (rt++ < sd->sd_meta->ssdi.ssd_chunk_no)
360                    goto ragain;

There are presently no such optimisations in-tree, but the softraid
policies are so simple that it's really easy to hack the code up to do
whatever else you may want.
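
For instance, a user-specified read priority order pretty much comes
down to replacing the counter-based modulo with a walk over a
preference list, still skipping offline chunks the way the rt/ragain
loop above does.  A rough standalone sketch of just that selection
logic (the struct, constants and helper names below are illustrative,
not softraid's):

/*
 * Standalone model of a priority-ordered chunk picker with round-robin
 * fallback.  All names below are illustrative, not softraid's.
 */
#include <stdio.h>

#define CHUNK_ONLINE    0
#define CHUNK_OFFLINE   1

struct chunk {
        int status;                     /* CHUNK_ONLINE or CHUNK_OFFLINE */
};

/*
 * Try the preferred chunks in order first, then fall back to plain
 * round-robin, skipping offline chunks much like the rt/ragain loop.
 * Returns the chunk index to read from, or -1 if nothing is usable.
 */
static int
pick_chunk(const struct chunk *chunks, int nchunks, const int *pref,
    int npref, unsigned int *counter)
{
        int i, c;

        for (i = 0; i < npref; i++) {
                c = pref[i];
                if (chunks[c].status == CHUNK_ONLINE)
                        return (c);
        }

        for (i = 0; i < nchunks; i++) {
                c = (int)((*counter)++ % nchunks);
                if (chunks[c].status == CHUNK_ONLINE)
                        return (c);
        }

        return (-1);
}

int
main(void)
{
        struct chunk chunks[3] = {
                { CHUNK_ONLINE },       /* SSD */
                { CHUNK_OFFLINE },      /* SSD, currently offline */
                { CHUNK_ONLINE }        /* HDD */
        };
        int pref[2] = { 0, 1 };         /* prefer the SSDs */
        unsigned int counter = 0;

        printf("read from chunk %d\n",
            pick_chunk(chunks, 3, pref, 2, &counter));
        return (0);
}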

>
> Since then did anyone implement selective reading based on experienced read
> operation time, or a user-specified device read priority order?

That would make the code less readable!  :-)
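
More seriously, if you did want selection based on experienced read
times, the usual approach is to keep a cheap per-chunk estimate, say
an exponentially weighted moving average of recent read latencies, and
read from the online chunk with the lowest one.  Again only a
standalone sketch of the idea (nothing like this exists in softraid;
all names are made up):

/*
 * Illustrative latency-aware picker: keeps a smoothed estimate of read
 * service time per chunk and reads from the fastest online one.
 */
#include <stdio.h>

#define CHUNK_ONLINE    0
#define CHUNK_OFFLINE   1

struct chunk {
        int     status;
        double  ewma_us;                /* smoothed read latency (us) */
};

/* fold a completed read's latency into the estimate (alpha = 1/8) */
static void
record_latency(struct chunk *c, double us)
{
        c->ewma_us += (us - c->ewma_us) / 8.0;
}

/* pick the online chunk with the lowest smoothed latency, or -1 */
static int
pick_fastest(const struct chunk *chunks, int nchunks)
{
        int i, best = -1;

        for (i = 0; i < nchunks; i++) {
                if (chunks[i].status != CHUNK_ONLINE)
                        continue;
                if (best == -1 || chunks[i].ewma_us < chunks[best].ewma_us)
                        best = i;
        }
        return (best);
}

int
main(void)
{
        struct chunk chunks[3] = {
                { CHUNK_ONLINE,   80.0 },       /* SSD */
                { CHUNK_ONLINE,   90.0 },       /* SSD */
                { CHUNK_ONLINE, 9000.0 }        /* HDD */
        };

        record_latency(&chunks[0], 120.0);
        printf("read from chunk %d\n", pick_fastest(chunks, 3));
        return (0);
}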

>
>
> That would allow Softraid RAID1 based on 1 SSD mirror + 1 SSD mirror + 1 HDD
> mirror, which would give the best combination of IO performance and data
> security OpenBSD would offer today.

Not sure what the practical point of such a setup would be.  Your
writes will still be limited by the slowest component (a RAID 1 write
isn't complete until every mirror has it, so the HDD sets the pace),
and IOPS figures are vastly different between SSDs and HDDs.  (And
modern SSDs are no longer considered nearly as unreliable as they once
were.)

>
> 2)
> Also if there's a read/write failure (or excessive time consumption for a
> single operation, say 15 seconds), will Softraid RAID1 learn to take the
> broken disk out of use?

A failure in a softraid1 chunk will result in the chunk being taken
offline.  (What constitutes a failure is most likely outside of
softraid's control.)
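
That is, once an I/O error does get reported back up, the chunk is
marked offline and the BIOC_SDOFFLINE case in the loop quoted above
makes subsequent reads skip it.  A toy model of just that state change
(not the actual completion path; the names are made up):

/*
 * Toy model of the degrade path: an I/O error takes the chunk offline,
 * after which a reader skips it.  Names are illustrative only.
 */
#include <stdio.h>

#define CHUNK_ONLINE    0
#define CHUNK_OFFLINE   1

struct chunk {
        int status;
};

/* called when an I/O to this chunk completes */
static void
io_done(struct chunk *c, int error)
{
        if (error)
                c->status = CHUNK_OFFLINE;      /* degrade the volume */
}

int
main(void)
{
        struct chunk mirror[2] = { { CHUNK_ONLINE }, { CHUNK_ONLINE } };

        io_done(&mirror[1], 1);                 /* disk 1 returned an error */
        printf("chunk 1 is %s\n",
            mirror[1].status == CHUNK_OFFLINE ? "offline" : "online");
        return (0);
}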

C.
