RE: FW: 20TB Storage System

2003-09-02 Thread Max Clark

>Just make sure you run UFS2, which is the default on -CURRENT because
>UFS1 has a 1TB limit.

- What's the limit with UFS2?

Are there hard requirements to run FreeBSD 5.x, or can I still run
-STABLE with this?

Thanks,
Max
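
A back-of-the-envelope on the limits mentioned above.  The assumption
that UFS1's ceiling comes from 32-bit block addresses over 512-byte
sectors, and UFS2's from 64-bit ones, is an editorial guess rather than
anything stated in the thread:

    # Rough sketch of why the 1TB ceiling goes away with UFS2.
    # Assumption: the limit is set by signed 32-bit vs 64-bit block
    # addresses over 512-byte sectors (not stated in the thread).
    SECTOR = 512  # bytes, assumed addressing unit

    ufs1_limit = 2**31 * SECTOR   # signed 32-bit addresses
    ufs2_limit = 2**63 * SECTOR   # signed 64-bit addresses

    print(f"UFS1 ceiling: {ufs1_limit // 2**40} TiB")   # 1 TiB
    print(f"UFS2 ceiling: {ufs2_limit // 2**40} TiB")   # ~4.3 billion TiB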



Re: FW: 20TB Storage System

2003-09-02 Thread Petri Helenius
Poul-Henning Kamp wrote:

>>2) What is the maximum size of a filesystem that I can present to the host
>>OS using vinum/ccd? Am I limited anywhere that I am not aware of?
>
>Good question, I'm not sure we currently know the exact barrier.
>
>Just make sure you run UFS2, which is the default on -CURRENT because
>UFS1 has a 1TB limit.
>
>>3) Could I put all 20TB on one system, or will I need two to sustain the IO
>>required?
>
>Spreading it will give you more I/O bandwidth.

Can you say why? Usually putting more spindles into one pile gives you
more I/O, unless you have very evenly distributed sequential access in a
pattern you can predict in advance.

Pete
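
One way to picture the two positions above, as a toy model: sequential
bandwidth scales with independent shelves until a per-host ceiling
bites, and either layout dwarfs the 30 MB/s target.  The 90-100 MB/s
per shelf and the six-shelf count (20TB at roughly 3.5TB per shelf) are
from the original post; the ~200 MB/s per-host ceiling is purely an
illustrative assumption:

    # Toy model: aggregate streaming bandwidth for N shelves split
    # across one or two hosts, with an assumed per-host ceiling.
    SHELF_MBPS = 95      # midpoint of the advertised 90-100 MB/s
    HOST_CAP_MBPS = 200  # assumed per-host bus/HBA ceiling (illustrative)
    TARGET_MBPS = 30     # the 250 Mbit/s requirement

    def aggregate(shelves, hosts):
        per_host = min((shelves / hosts) * SHELF_MBPS, HOST_CAP_MBPS)
        return hosts * per_host

    for hosts in (1, 2):
        bw = aggregate(shelves=6, hosts=hosts)
        print(f"{hosts} host(s): ~{bw:.0f} MB/s aggregate, "
              f"{bw / TARGET_MBPS:.1f}x the target")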



Re: FW: 20TB Storage System

2003-09-02 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, "Max Clark" writes:
>I know adding ccd/vinum to the equation will lower my IO throughput, but the
>question is... if I have an external hardware shelf with 3.5TB (16 250GB
>drives w/ Raid 5 from hardware) and I put a Raid 0 stripe across 3 of these
>shelves what would my expected loss of IO be?

The loss will mostly be from latency, but how much is impossible to
tell, I think.

The statistics of this, even with my trusty old Erlang table, would
still be too uncertain to be of any value.

-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


RE: FW: 20TB Storage System

2003-09-02 Thread Max Clark
I know adding ccd/vinum to the equation will lower my IO throughput, but the
question is... if I have an external hardware shelf with 3.5TB (16 250GB
drives w/ Raid 5 from hardware) and I put a Raid 0 stripe across 3 of these
shelves, what would my expected loss of IO be?

Thanks,
Max
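
A rough headroom check on that question, using only figures quoted in
this thread (the advertised 90-100 MByte/s per shelf and the 30 MByte/s
target).  The overhead percentages are illustrative guesses, since the
point of the replies is that the real loss can't be predicted:

    # Aggregate bandwidth of a RAID-0 stripe over three RAID-5 shelves,
    # under a range of assumed ccd/vinum overheads.
    SHELVES = 3
    TARGET_MBPS = 30

    for per_shelf in (90, 100):                # advertised per-shelf rate
        raw = SHELVES * per_shelf
        for overhead in (0.10, 0.30, 0.50):    # assumed striping loss
            usable = raw * (1 - overhead)
            print(f"{per_shelf} MB/s/shelf, {overhead:.0%} loss: "
                  f"~{usable:.0f} MB/s vs {TARGET_MBPS} MB/s target")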

-Original Message-
From: Poul-Henning Kamp [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 02, 2003 1:02 PM
To: Max Clark
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]
Subject: Re: FW: 20TB Storage System


In message <[EMAIL PROTECTED]>, "Max Clark" writes:

>Given the above:
>1) What would my expected IO be using vinum to stripe the storage enclosures
>detailed above?

That depends a lot on the application's I/O pattern, and I doubt a
precise prediction is possible.

In particular, FibreChannel throughput is hard to predict, because the
various implementations each seem to have their own peculiar
performance quirks.

On SEAGATE ST318452 disks, I see sequential transfer rates of 58MB/sec
at the outside rim of the disk.  If I stripe two of them with CCD I get
107MB/sec.

CCD has better performance than Vinum where they are comparable.

RAID-5 and striping across a large number of disks do not scale
linearly, performance-wise.  In particular, you _may_ see your average
access time drop somewhat, but there is by no means a guarantee that it
will be better than that of an individual drive.

>2) What is the maximum size of a filesystem that I can present to the host
>OS using vinum/ccd? Am I limited anywhere that I am not aware of?

Good question, I'm not sure we currently know the exact barrier.

>3) Could I put all 20TB on one system, or will I need two to sustain the IO
>required?

Spreading it will give you more I/O bandwidth.

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.



Re: FW: 20TB Storage System

2003-09-02 Thread Brooks Davis
[This isn't really a performance issue, so I trimmed it.]

On Tue, Sep 02, 2003 at 12:48:29PM -0700, Max Clark wrote:
> I need to attach 20TB of storage to a network (as low cost as possible), I
> need to sustain 250Mbit/s or 30MByte/s of sustained IO from the storage to
> the disk.
> 
> I have found external Fibre Channel -> ATA 133 Raid enclosures. These
> enclosures will house 16 drives so with 250GB drives a total of 3.5TB each
> after a RAID 5 format. These enclosures have advertised sustained IO of
> 90-100MByte/s each.
> 
> One solution we are thinking about is to use a Intel XEON server with 3x FC
> HBA controller cards in the server each attached to a separate storage
> enclosure. In any event we would be required to use ccd or vinum to stripe
> multiple storage enclosures together to form one logical volume.
> 
> I can partition this system into two separate 10TB storage pools.
> 
> Given the above:
> 1) What would my expected IO be using vinum to stripe the storage enclosures
> detailed above?
> 2) What is the maximum size of a filesystem that I can present to the host
> OS using vinum/ccd? Am I limited anywhere that I am not aware of?

Paul Saab recently demonstrated a 2.7TB ccd, so you shouldn't hit any
major limits there (I'm not sure where the next barrier is, but it
should be a ways off).  I'm not sure about UFS.

> 3) Could I put all 20TB on one system, or will I need two to sustain the IO
> required?

In theory you should be able to do 250Mbps on a single system, but I'm
not sure how well you will do in practice.  You'll need to make sure you
have sufficient PCI bus bandwidth.
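
For scale, the usual theoretical PCI peaks against that requirement;
nothing below is measured on the proposed hardware, and sustained
numbers will be lower:

    # Theoretical PCI peaks vs the 250 Mbit/s (~31 MByte/s) target.
    TARGET_MBPS = 250 / 8   # 250 Mbit/s -> 31.25 MByte/s

    pci_peaks = {
        "32-bit/33MHz PCI": 133,   # MB/s, theoretical
        "64-bit/66MHz PCI": 533,   # MB/s, theoretical
    }

    for bus, peak in pci_peaks.items():
        print(f"{bus}: {peak} MB/s peak, ~{peak / TARGET_MBPS:.0f}x "
              f"the {TARGET_MBPS:.1f} MB/s requirement")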

> 4) If you were building this system how would you do it? (The installed $/GB
> must be below $5.00 dollars).

If you are willing to accept the management overhead of multiple
volumes, you will have a hard time beating 5U 24-disk boxes with 3
8-port 3ware arrays of 300GB disks.  That gets you 6TB per box (due to
controller limitations restricting you to 2TB per controller) for a bit
under $15000, or $2.5/GB.  The raw read speed of the arrays is around
85MBps, so each array easily meets your throughput requirements.  Since
you'd have 20 arrays in 4 machines, you'd easily meet your bandwidth
requirements.  If you can't accept multiple volumes, you may still be
able to use a configuration like this with either target mode drivers
or the disk-over-network GEOM module that was posted recently.

You will need to use 5.x to make this work.

-- Brooks
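
The arithmetic behind those figures, for anyone checking the math; the
6TB usable per box, the ~$15000 price, and the 2TB-per-controller cap
are taken from the post as stated:

    # Cost and capacity math for the suggested 5U 24-disk 3ware box.
    DISKS_PER_BOX = 24
    DISK_GB = 300
    USABLE_TB_PER_BOX = 6     # as stated: 2TB per controller x 3
    BOX_COST = 15000          # "a bit under $15000"
    TOTAL_TB = 20

    raw_tb = DISKS_PER_BOX * DISK_GB / 1000
    cost_per_gb = BOX_COST / (USABLE_TB_PER_BOX * 1000)
    boxes = -(-TOTAL_TB // USABLE_TB_PER_BOX)   # ceiling division

    print(f"raw capacity per box: {raw_tb:.1f} TB")     # 7.2 TB
    print(f"cost per usable GB:   ${cost_per_gb:.2f}")  # $2.50
    print(f"boxes for {TOTAL_TB} TB: {boxes}")          # 4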




Re: FW: 20TB Storage System

2003-09-02 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, "Max Clark" writes:

>Given the above:
>1) What would my expected IO be using vinum to stripe the storage enclosures
>detailed above?

That depends a lot on the application's I/O pattern, and I doubt a
precise prediction is possible.

In particular, FibreChannel throughput is hard to predict, because the
various implementations each seem to have their own peculiar
performance quirks.

On SEAGATE ST318452 disks, I see sequential transfer rates of 58MB/sec
at the outside rim of the disk.  If I stripe two of them with CCD I get
107MB/sec.

CCD has better performance than Vinum where they are comparable.

RAID-5 and striping across a large number of disks do not scale
linearly, performance-wise.  In particular, you _may_ see your average
access time drop somewhat, but there is by no means a guarantee that it
will be better than that of an individual drive.

>2) What is the maximum size of a filesystem that I can present to the host
>OS using vinum/ccd? Am I limited anywhere that I am not aware of?

Good question, I'm not sure we currently know the exact barrier.

>3) Could I put all 20TB on one system, or will I need two to sustain the IO
>required?

Spreading it will give you more I/O bandwidth.

-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
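
The scaling implied by the ST318452 example above, expressed as a
simple ratio; both numbers are from the post, and nothing here says how
larger stripes or RAID-5 will behave:

    # Two-disk CCD stripe vs the ideal of twice a single disk.
    single = 58.0    # MB/s, one disk, outer tracks
    striped = 107.0  # MB/s, two disks striped with CCD

    ideal = 2 * single
    print(f"ideal 2-disk stripe: {ideal:.0f} MB/s")
    print(f"measured: {striped:.0f} MB/s ({striped / ideal:.0%} of ideal)")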