Bob La Quey wrote:
> On 5/22/07, Carl Lowenstein <[EMAIL PROTECTED]> wrote:
>> On 5/22/07, Lan Barnes <[EMAIL PROTECTED]> wrote:
>> >
>> > On Tue, May 22, 2007 4:00 pm, Carl Lowenstein wrote:
>> > > On 5/22/07, Lan Barnes <[EMAIL PROTECTED]> wrote:
>> >
>> > > SATA (Serial ATA) uses a different data transmission scheme (serial
>> > > on 2 wires rather than parallel on 40 wires).  Probably faster in
>> > > data transmission, although the overall data rate is limited by the
>> > > disk drive or the motherboard, whichever is slower.  It requires a
>> > > different controller on the motherboard, with the concomitant
>> > > difference in cables and connectors.  One can buy SATA controllers
>> > > that plug into the motherboard PCI backplane.
>> > >
>> > > So if you are upgrading an old system, you probably don't want
>> > > SATA.  Unless you are upgrading it by getting a new disk drive to
>> > > put into an external box.  Then you can get a box that connects to
>> > > the rest of the world by USB or Firewire or Ethernet, and has the
>> > > right kind of internals to fit your disk drive.
>> >
>> > And are you implying that PATA is the *ATA that is usually more
>> > expensive?
>>
>> The disk drives are about the same price.  In fact, I just looked
>> again at frys.com and found that they have 500GB Maxtor drives in
>> either SATA or PATA flavor for the same $99.
>>
>> New construction should also be about the same price, possibly a bit
>> less for SATA because the construction overhead for the 4-wire cable
>> is less than for the 40-wire (really 80-conductor) cable.  It requires
>> a custom logic chip, but those are cheap to produce once the design
>> has been finished.
>>
>> If you are adding or replacing an internal disk in your existing
>> system, SATA will most likely not work because the motherboard wasn't
>> designed that way.  (My small Dell server just gets in under the wire,
>> having the classical 4 PATA ports and also 2 SATA).
>>
>> It isn't a problem of expense, just of being compatible with an
>> existing system.
>>
>>     carl
> 
> One question in my mind was how much disk on each workstation versus
> disk on the NAS (or other attached file store, e.g., à la Andrew's
> suggestions). My thought was to put about 100 GB on the workstations
> and a TB or so on the NAS. There's not a lot of need for local hard
> disk; I would rather spend local money on RAM. Indeed, I can almost see
> going diskless on the workstations and putting every workstation dollar
> into RAM, with all the disk as NAS (or equivalent, à la Andrew).
> 
> I would think outfitting such a development work group would be a
> pretty common problem.
> 
> Thoughts?
> 
> As I think about it, I wonder about a peer-to-peer logical RAID
> where each workstation contributes its local disk. Is there anything
> like that out there?
> 
> What if we consider the system as a loosely coupled cluster with multiple
> users? A model more like EC2 ... hmmm. I think I am getting carried away.

I add my vote to the prior remarks about SCM.

Regarding NAS: requiring (or at least encouraging) developers to keep
their work environment on common storage makes it possible to implement
automatic backup centrally, I suppose.
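
For example, a single nightly cron job on the file server could sweep
everyone's work area into a backup volume. Just a sketch -- the paths
and schedule are made up:

    # /etc/crontab on the file server: nightly copy of the shared work area
    30 2 * * *  root  rsync -a --delete /srv/devhome/ /backup/devhome/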

And staying with the shared environment, one might argue that it becomes
easier to develop good (or at least documented) conventions and working
habits -- directory structures, file naming, and so on.
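
Something as simple as an agreed-upon layout on the shared volume would
already help. Purely hypothetical names, of course:

    /srv/devhome/
        lan/          per-developer workspace
        bob/
        shared/
            tools/    common build scripts
            refdata/  test data everyone references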

Anybody want to add other advantages? .. Disadvantages?

I guess tradition is to have NFS-mounted homedirs, eh? I myself have
never really worked that way. Can others comment on how this works out
in a development environment?
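
What I picture is the usual export on the server plus a mount (or
automount) entry on every workstation -- hostname, subnet, and paths
invented for the sake of the example:

    # on the file server, /etc/exports
    /export/home    192.168.1.0/24(rw,sync)

    # on each workstation, /etc/fstab
    fileserver:/export/home   /home   nfs   rw,hard,intr   0 0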

Are developers jealous of their private workspace?

Regards,
..jim

