> > JRT> I was considering buying a small 1-2 GByte flash disk for my
> > JRT> system so I was looking over what was available and didn't find
> > JRT> what I wanted.
> >
> > JRT> This should considerably speed up a typical desktop Linux system
> > JRT> if used for swap.
> >
> > JRT> Perhaps SD cards are a better idea.
> > 
> > 
> > I thought these devices had a limited number of writes, and thus were
> >  a bad choice for things like swap?
> 
> They claim to have solved that issue.  There are plans to use flash
> memory for the buffer on a hard disk -- the hybrid disk drive.

I haven't been following these devices closely.  Did they really solve it,
or just get it good enough to get by for a few specific applications for a
couple of years, or whatever the life expectancy of consumer electronics is
these days?  I think the idea for the hybrid disk drive is to put the stuff
you need for booting and a few common apps on the semi, and then you don't
have to spin up the disk very often, for "typical" users.  Useful for a
laptop to reduce battery drain.

>  IAC,
> does the operating system still accumulate a list of bad blocks?

That undoubtedly depends on the OS, and perhaps on the device driver.
Last I read, NetBSD did, but I don't think the list is saved across reboots.
And IIRC there is some system call to zero out the list.

> > Do you have main memory maxed out?
> 
> For now yes.  I plan to upgrade this year and will probably get a board
> that will hold 4 GBytes and probably start with 2 if I can afford it.

Memory prices vary *greatly* with the type of memory.  About 1.5 years ago
I got 2 GiB of DDR400 ECC for USD 124 /GiB including "free" shipping.
I wish my Alphas could use that kind of memory!

> Since Linux always uses all of main memory, system performance can
> always be improved by adding swap.

Adding swap allows you to run more/bigger processes.  If you are actually
using swap, adding additional swap devices will allow swap I/O to run
in parallel, which will help system performance.  Adding additional swap
devices to my AMD64 would not help performance at all, since with 2 GiB of
main memory it never swaps.
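To make the parallel-swap point concrete: on Linux, swap areas configured with the same priority are used round-robin, which stripes swap I/O across the devices.  A minimal /etc/fstab sketch (device names are examples only, not from my actual setup):

```
# Two swap partitions at equal priority (pri=5); the kernel
# round-robins pages across them, so swap I/O runs in parallel.
# /dev/sda2 and /dev/sdb2 are hypothetical device names.
/dev/sda2   none   swap   sw,pri=5   0   0
/dev/sdb2   none   swap   sw,pri=5   0   0
```

If the priorities differ, the higher-priority area is filled first and you lose the striping effect.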

> > JRT> So far I could only find them with DMA 0 which seems to be limited by
> > JRT> the card speed.  DMA 0 is 16 MB/sec.  The fastest CF (133x) is 20 MB/s.
> > JRT> SDs come slightly faster at 22.5 MB/sec which is still slower than DMA 1.
> > 
> > A 7200 rpm SATA drive does 40 MB/s sustained at the slow end of the 
> > platters, and 65-70MB/s sustained at the fast end.
> 
> I was looking for something faster than IDE.

40-70 MB/s is faster than 20 MB/s.  You can get 10K RPM SATA drives.
I don't know what their throughput rates are.  On the other hand, if
you are concerned with paging performance as well as swapping, then
seek time and rotational latency come into play and most semi devices
would have faster access times.
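To put rough numbers on the rotational-latency part of that: on average the head waits half a revolution for the right sector to come around.  A quick back-of-the-envelope sketch:

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: half a revolution, in milliseconds."""
    ms_per_revolution = 60_000 / rpm
    return ms_per_revolution / 2

print(round(avg_rotational_latency_ms(7200), 2))   # ~4.17 ms
print(round(avg_rotational_latency_ms(10_000), 2)) # 3.0 ms
```

Add a few ms of seek time on top of that, and a semi device with sub-millisecond access looks very attractive for paging, whatever its sustained throughput.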

> SCSI 320 approaches 100
> MB/s sustained but an 18 GByte and a controller would be rather expensive.

Yes, these days SCSI is aimed only at "enterprise" class machines, and priced
accordingly.  :-(  SATA is almost as good and a LOT less expensive.  The problem
is that the controllers aren't documented, so we don't have NCQ support.
Without NCQ support the write performance sucks.  (Unless you enjoy living
dangerously with the disk's write cache in write-back mode.)  We need
FLOSS-friendly SATA controllers.

Am I correct in thinking that a SATA controller would be easier and faster to
design than a graphics/video controller, and that it would cost significantly
less to get an ASIC fabbed?

> > Are you trying to avoid moving parts?
> 
> I'm not sure exactly what the design issues are with a disk less system.
>   Disks are getting rather small if space is an issue.

You need something to boot from.  In many cases you'll also need data storage;
NFS is popular, but has its problems.

> > Product idea: How about a PCIe card with 8 SATA ports and a 
> > documented protocol for NCQ?
> 
> That would be useful for a server, but for a workstation I think that 4
> HDs would be the practical limit for most uses.  Actually, you need a
> server size tower to take more than 4 3.5 inch hard drives.

A server?  You don't need a full-height 19" rack to hold more than 4 3.5"
drives.  I have a machine used mostly as a DVR that could hold up to 18 3.5"
drives.  Seeking kills disk performance; having multiple disks can reduce
seeking and thus help performance a lot, depending on what you're doing.
>  Also, it
> usually costs more for 4 drives at 1/4 the capacity each (for the same
> total capacity).

The last time I bought disks was in December.  Here are the numbers.
Seagate 7200 rpm 3.5" SATA disk prices from 2006-12, in USD:

750 GB                $329.99
250 GB  $ 81.99  x3 = $245.97   x4 = $327.96
difference            $ 84.02          $  2.03

So for a hair less money than one 750 GB drive, I could get 4 250 GB drives
and have 33% more capacity.
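The arithmetic behind that, for anyone checking:

```python
# Prices from the table above (Seagate 7200 rpm 3.5" SATA, 2006-12, USD).
one_750gb = 329.99
one_250gb = 81.99

print(round(one_750gb - 3 * one_250gb, 2))  # 84.02 saved with three 250s
print(round(one_750gb - 4 * one_250gb, 2))  # 2.03 saved even with four 250s
print(round(4 * 250 / 750 - 1, 2))          # 0.33 -> 33% more capacity
```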
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)