Comments inline.

Warren Togami wrote:
The cheapest server with this capability would be a dual Athlon with 2-4GB
of Registered ECC DDR SDRAM.


These kinds of systems are mega-powerful, and unless you have lotsa people doing lots of stuff all at the same time, it should probably be sufficient. This is also the biggest machine you can build/buy without having to go to really expensive quad machines (which is currently the domain of the Intel Xeon processor and things like the IBM Power and Alpha architectures). One of the nice things about X remote display, though, is that it is mostly (completely?) architecture independent. You could have something like a small Sun (small by comparison to other Sun stuff...) or a big IBM or Alpha box remote displaying to normal Intel systems. Unfortunately, this does restrict you to mostly OSS software, since commercial binaries are usually only for x86 (or maybe PPC if you're lucky).

A good way to find out what you need would be to have, say, 5-10 people use a desktop for the kind of work you're looking at, add up their combined load, then scale up to the number of clients you'll have. This will probably be the maximum of what you'd ever need, but it's always better to be a little overpowered than underpowered: you don't want people complaining about slowness, and you want to be ready for future software that may be more CPU/RAM hungry.
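Here's a quick back-of-the-envelope sketch (Python, purely for illustration) of that scaling math. Every number in it is made up; measure your own test group and plug in real figures.

# Rough LTSP server sizing from a small test group.
# All values below are hypothetical placeholders.

test_users = 8
measured_ram_mb = 1100      # RAM the test group used at peak, in MB
measured_cpu_pct = 65       # CPU they used at peak, as % of one CPU

target_clients = 30         # thin clients you expect to serve
headroom = 1.5              # fudge factor so nobody complains about slowness

ram_per_user = measured_ram_mb / test_users
cpu_per_user = measured_cpu_pct / test_users

needed_ram_mb = ram_per_user * target_clients * headroom
needed_cpus = cpu_per_user * target_clients * headroom / 100

print("RAM needed:  about %d MB, plus some for the OS itself" % needed_ram_mb)
print("CPU needed:  about %.1f CPUs' worth" % needed_cpus)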

http://www.amdmb.com/article-display.php?ArticleID=183
Something like this motherboard.  Several different brands are available
other than Tyan.

http://www.crucial.com/store/PartSpecs.asp?imodule=CT6472Y265
Something like this RAM, although you would need to go with brands other
than Crucial in order to get larger than 512MB sized DIMMs.  You can fit
four of these cheap Crucial DIMMs into the Tyan motherboards for a
respectable 2GB of RAM.  I'd personally shell out the extra money and go for
1GB DIMMs and 2-3GB total, with banks left open for future upgrades if needed.


Watch out on the RAM issue. These Tyan motherboards are REALLY, and I mean *REALLY* picky about what RAM you put in them. I built one with decent RAM (registered, ECC, but from an off-brand) and the board will not run. Memtest86 SCROLLS ERRORS as fast as it can. Linux kernel panics on startup. Obviously the RAM isn't compatible. MAKE SURE THE RAM YOU GET IS ON THE TYAN COMPATIBILITY LIST!

http://www.3ware.com/
For your hard disks, use 3Ware 7000 series RAID controllers with regular IDE
hard drives.  You can buy several cheap 120 or 160GB drives and make RAID
1+0 or RAID 5 arrays for a fraction of the cost of SCSI RAID.  You get full
hotswap and hotspare capability, and larger storage sizes than SCSI.  Yes
SCSI can be faster, but you don't need that extra speed with LTSP.  You
mainly need massive capacity and failover redundancy.


I have little experience with 3ware cards, but I hear they work really well. You'll probably want RAID 5, since it gives you a combination of speed/size (striping) and redundancy (parity): you can lose any one drive, and a hot spare can begin rebuilding immediately without you having to swap a drive or take the server down to do it.
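To see why RAID 5 is attractive when capacity is what you're after, here's a toy comparison of usable space. The drive count and size here are hypothetical; use whatever you actually plan to buy.

# Usable capacity: RAID 5 vs RAID 1+0, with hypothetical drives.

drives = 6
drive_gb = 120

raid5_usable = (drives - 1) * drive_gb     # one drive's worth goes to parity
raid10_usable = (drives // 2) * drive_gb   # half the drives are mirror copies

print("%d x %dGB drives:" % (drives, drive_gb))
print("  RAID 5   usable: %d GB (survives any single drive failure)" % raid5_usable)
print("  RAID 1+0 usable: %d GB (survives one failure per mirror pair)" % raid10_usable)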

You might also want to consider hot swap sleds. These let you swap out hard drives without shutting the box down, which is good for availability. Unfortunately, this is mostly the domain of SCSI (SCA connectors are SCSI), but I recall reading about sleds for IDE drives that use an adapter you plug a normal IDE drive into (a standard Molex power connector and 40-pin IDE).

DO NOT BUY Promise or Highpoint IDE RAID controllers.  They are very poorly
supported on Linux.  Very bad idea.

They're supported, but you might as well use Linux's software RAID, since that's effectively what the Promise and Highpoint controllers are doing anyway -- the RAID work happens in the driver, not on the card. I personally have a HighPoint 370 controller that I use all the time... as a standard single-drive UDMA100 interface, because my board has a BX chipset and therefore only does UDMA33 off the chipset.


Add several ethernet cards to spread out the bandwidth usage.  20 LTSP
clients can easily eat the bandwidth of a single 100mbit ethernet port.

http://www.dlink.com/products/adapters/dfe580tx/
This four port 100mbit ethernet card looks interesting, and appears to be
supported by the Linux tulip driver.


There are also a lot of cheap (less than $80) GIGABIT ethernet adapters out there that are decently supported by Linux (copper, of course). There was an article on Slashdot a while back comparing their performance. The problem is that a switch with a gigabit uplink ain't cheap, but you should be getting a really nice switch for this anyway.
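To get a feel for the numbers, here's a tiny sketch. The per-client figure is my guess, not a measurement -- busy X sessions vary a lot, so measure your own traffic if you want a real answer.

# How many clients roughly fit on a given uplink, assuming an average
# bandwidth per active X session.  The 5 mbit/s figure is a guess.

per_client_mbit = 5
links = {"single 100mbit port": 100, "gigabit uplink": 1000}

for name, capacity in links.items():
    print("%s: roughly %d clients before it saturates"
          % (name, capacity // per_client_mbit))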

Another option is the "bonding" module that Linux has. This lets you bond multiple interfaces together to act as one (it requires a compatible switch; my 3Com seems to work, and some Ciscos are listed as compatible, but the switch will need trunking support and obviously must be managed so you can set it up). What this does is round-robin traffic at a per-frame level, making it look like you've got one really high-bandwidth interface. You could even bond multiple gigabit NICs together if you REALLY need a lot of bandwidth (I find this unlikely for anything less than 20 clients).
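Purely as an illustration of the round-robin idea (the real work is done by the Linux bonding driver and the switch, not anything you'd write yourself; the interface names and counts are just examples):

from itertools import cycle

# Each outgoing frame goes to the next NIC in the bond, so the aggregate
# link behaves like one fat pipe.
nics = ["eth0", "eth1", "eth2"]        # e.g. three 100mbit ports bonded
next_nic = cycle(nics)

for i in range(7):
    print("frame %d -> %s" % (i, next(next_nic)))

print("aggregate: ~%d mbit/s across the bond" % (len(nics) * 100))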


----- Original Message -----
From: "Michael Ableyev" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, June 12, 2002 6:14 PM
Subject: [luau] LTSP hardware



I was wondering if anyone could make a suggestion for hardware for a LTSP
.......

Hope this helps some.

--MonMotha
