Hi Aryan,
At my uni about 95% of the workstations (roughly 3000 of them) are X terminals: essentially dumb terminals with nothing more than a network bootloader, an Ethernet card, a small amount of RAM (about 16 MB), a bus connecting the other devices in the box, a very large 21" flat-screen monitor, an SGI keyboard and mouse, a simple true-colour graphics interface to drive the monitor, one small PC speaker, and one tiny power adaptor the size of a matchbox. No CPU, no PCI slots, no power, IDE or SCSI cables, no controllers, no huge heatsinks or noisy fans. The whole box is about the size of a video cassette and can be strapped to the underside of the desk it sits on. They need no interaction from the user, and they are always on. In my opinion they are the perfect solution to mass computing.

The servers that service them are Sun stations, each of which has 64 128-bit UltraSparcs, with (I think) 6.4 GB of RAM and 2 TB of disk each. I have never once seen any of the X-terminal labs go down. Each X terminal is directly connected to the servers via gigabit switches, and you can get a variety of different OSs thanks to Citrix MetaFrame. At the moment the servers can provide:

1. Solaris 9.0
2. OpenBSD
3. FreeBSD
4. RH Enterprise 9.0
5. Windows 2k Prof
6. DOS 5.0

However, a setup like this costs A LOT of money, and at my uni it is only available in the CS, SE and EE departments. The library systems run something similar to yours, but under NT, and while I've never ventured into departments other than chemistry and maths, those two seem to be using Win2k. I think the Win2k licenses came from Microsoft at a very cheap price; so cheap, in fact, that Microsoft was effectively paying the uni to provide Win2k to the students. Every student, regardless of department, has the right to obtain one Win2k Prof, one WinXP Prof, one Visual Studio Enterprise, one WinServer 2003 and one Office XP Prof license for free, and to do with them as they please.
I sold mine for a neat little profit of $2800 AUD :)

The majority of the costs, as I see it, come from maintaining the system. A whole technical services department is dedicated to running just the CS side of it; I think it comprises 15 full-time professional sysadmins and about 40 part-time students. Citrix MetaFrame licensing is another cost, but I think it's worth it, because there isn't an open-source equivalent.

One advantage of such a system is heat: in the labs that have PCs it can get really hot even with the air-conditioning on, especially around assignment deadlines, whereas the X-terminal labs are always cool and serene. Another advantage is for people doing client-server subjects: they can easily build and test their code or scripts on such an architecture without having to get permission to install things in different parts of the OS, because the only OS view you can modify is your own. If you stuff up, you won't be modifying anyone else's view of the OS. It works like a VM in a way: the system takes a snapshot of your whole system state when you log out and brings it back up when you log in, and if you do stuff up your setup, you can get a clean one and recover lost work from the snapshots.

Another advantage is that you don't have to be at uni to use the system. You can log on from home via ssh and use console mode, or, if you have X installed and a really fast connection, graphical mode; either way you get the same result. You don't even have to be on a Unix system: from Windows you can use something like VNC to get a MetaFrame session of your desired OS.

The only major disadvantage I saw was that some subjects really load the system: AI assignments doing things like A* searches over a search space, or people doing 3D graphics.
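Incidentally, the snapshot-on-logout scheme I mentioned can be sketched in miniature like this (a rough illustration only; the directory names and archive scheme are made up, and the real system obviously snapshots far more than a home directory):

```python
import shutil
import tempfile
from pathlib import Path

def snapshot(home: Path, store: Path, name: str) -> Path:
    """On logout: archive the user's whole home directory."""
    store.mkdir(parents=True, exist_ok=True)
    # make_archive appends .tar.gz to the base name it is given
    return Path(shutil.make_archive(str(store / name), "gztar", root_dir=home))

def restore(archive: Path, home: Path) -> None:
    """On login (or after a stuff-up): wipe the view and bring the snapshot back."""
    shutil.rmtree(home, ignore_errors=True)
    home.mkdir(parents=True)
    shutil.unpack_archive(str(archive), extract_dir=home)

# Tiny demonstration: snapshot, "stuff up" the setup, then recover.
work = Path(tempfile.mkdtemp())
home, store = work / "home", work / "snapshots"
home.mkdir()
(home / "assignment.c").write_text("int main(void) { return 0; }\n")

arch = snapshot(home, store, "logout-1")
(home / "assignment.c").unlink()          # oops, deleted my work
restore(arch, home)
print((home / "assignment.c").exists())   # the snapshot brought it back
```

The point is only that each user's "view" of the system is a private, restorable artefact, so breaking it never affects anyone else.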
Last year the IT department set up one room of PCs running RH 8.something dedicated entirely to 3D graphics and cluster computing. They subsequently removed the OpenGL and MPI headers and runtimes from gcc on the main system, and also limited the number of processes you can have running to 60. That has lifted a great load off the main servers: before, 3D rendering and MPI overheads were handled by the servers, whereas now they're handled by the GeForce4s and dual P4s in the PCs themselves. You still get people running GAs and other AI-related stuff, and there are always first-years doing CS theory who run example TSP code to see whether it really is factorially complex. I'm guilty of such things too (or was, I should say), but the system has policies for CPU time and so on, and the more the sysadmins refine those policies, the better the system gets.

There is another disadvantage, mainly related to assignments where things are time-critical (distributed servers are not real-time OSs). In a DB and file structures assignment I had a few years back, we were asked to build a large file (1 GB) of records and access them sequentially and randomly to demonstrate the caching properties of the OS. When you have many people doing the same assignment, though, you get situations where no caching happens at all, because the system can't decide what to cache and what not to: each file was different, and the OS has only a finite amount of memory. The assignment had to be done on those particular Sun systems because they had nanosecond-resolution timing, and to keep everyone's results consistent. I would see different results when I ran my trials during the day versus at about 3 am, most likely due to user load.

IMHO no one really needs a PC, in the sense of the typical legacy architecture of CPU, RAM, HDD and so on.
All you need are ways of inputting data (keyboard, mouse, CD drive; incidentally we don't have CD drives on the X terminals at uni, though a newer version called the SunRay has a DVD/CD player, and if we need to get data from a CD onto our accounts there are labs with CD-equipped PCs that have access to the main system) and ways of getting output (monitor, sound card and printer). As far as unis go, all you need are keyboards, mice, monitors and printers. Subjects that require sound cards and high-end graphics should have their own separate labs with dedicated PCs, but still with a log-in scheme equivalent to that of the main system. This kind of setup is not possible under Windows, so a Unix/Linux combination is what you need. But again it comes back to money: you need lots of it, and you need it up front. That said, now would be a good time to invest in such systems, because Sun has heavily reduced its prices on enterprise systems. On the other hand, looking at the specs you've given for your labs, it doesn't seem like your uni is really interested in upgrading its technology.

Now, coming to x86... as servers this architecture leaves a lot to be desired. Even in clustered environments the overheads are huge, and other than specially built motherboards you're not going to find boards that can handle more than dual CPUs. They also don't employ vectoring on their processing buses, relying heavily on interrupt-driven events instead, which for a server is a lot of overhead. x86-type servers will always be cheaper than Crays (not made any more :() and Sun stations, and will probably cost less to administer, because more people are certified for the x86 platform than for the UltraSparc platform.

Well, I've given you the layout at my uni. It services a maximum load of about 3500 students, with different OSs and different services.
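Going back to that DB and file-structures assignment: the effect we were measuring can be sketched in toy form like this (a rough illustration only; the file name, record size and record count are made up, and a file this small fits entirely in the page cache, so the sequential/random gap will be far smaller than with the real 1 GB file under heavy multi-user load):

```python
import os
import random
import time

REC_SIZE = 512           # bytes per record (assumed; the real assignment used a 1 GB file)
NUM_RECS = 20_000        # ~10 MB here, just to keep the demo quick

# Build a file of fixed-size records.
with open("records.dat", "wb") as f:
    for i in range(NUM_RECS):
        f.write(i.to_bytes(8, "little") + b"\0" * (REC_SIZE - 8))

def read_all(order):
    """Read every record in the given order; return elapsed seconds."""
    start = time.perf_counter()
    with open("records.dat", "rb") as f:
        for i in order:
            f.seek(i * REC_SIZE)
            f.read(REC_SIZE)
    return time.perf_counter() - start

seq = read_all(range(NUM_RECS))                          # sequential pass
rnd = read_all(random.sample(range(NUM_RECS), NUM_RECS)) # random pass
print(f"sequential: {seq:.3f}s  random: {rnd:.3f}s")
os.remove("records.dat")
```

Run the same loop at midday and at 3 am on a shared server and the difference in the two numbers tells you how much of the cache you are actually getting, which is exactly the inconsistency I kept running into.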
As I indicated before, you've got to know up front what the system is going to be used for: what kind of assignments students will do on it, and whether they'll be using packages like Matlab or Mathematica. These are the really important questions, and I don't think you'll be able to answer them by yourself; only the people teaching the courses can. Also remember that a uni computer system is just that: a system to provide computational services for uni work and nothing else. At least, that's what the finance department would like to hear when they eventually sign over the money; they don't want to hear that it can also be used for things other than uni work, from which they can't derive a profit.

There's also a very sinister aspect of large computer setups. I haven't seen this at my uni, but at another the IT technical group intentionally advocated setting up Windows 98 and Windows ME in lieu of more stable systems such as Linux or even Win2k, just so they could guarantee their own employment. Be careful of this, because you may think you are helping people, but you could also be making enemies in the process.

Anyway, if you want more specific info on performance and prices, e-mail me and I'll talk to the sysadmins.

Regards
Arash

__________________________________________________
Be one who knows what they don't know,
Instead of being one who knows not what they don't know,
Thinking they know everything about all things.
http://www.partow.net

----- Original Message -----
From: "Aryan Ameri" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, October 29, 2003 10:43 AM
Subject: [linuxiran] Implementing Thin Clients

> Hi there:
>
> I am here to ask for help, advice, or anything you can give me.
> Here in the university, Windows rules the desktop. It's not that I do
> not like Windows; in fact I am happy with any solution that works(TM).
> But the problem is, the current solution deployed in our computer labs
> doesn't work. I want to change it.
>
> Basically, here in the computer labs, all the computers are white-box
> PCs with a Pentium or Pentium II processor, 64 MB of RAM, and Windows
> 98 installed on top. Any user can come and sit down and have total,
> complete access to the machine. As you know, Windows 98 lacks any kind
> of administration capabilities, so any user here can do nearly anything
> with the computer. Every single student who enters the lab can do
> whatever he/she wants: install software, remove software, corrupt the
> OS, delete the partition, ... The result is quite imaginable. At any
> given moment 30% of the computers do not work, and the administrators
> have a really hard time fixing them, only to find that another one is
> now broken.
>
> It amazes me that until now no one had thought of a better solution.
> It seems everybody took this situation for granted, thinking it's the
> only way of doing things. Not me; I want to change this whole mess.
>
> Upgrading to Windows 2k or XP is of course a solution, but a costly
> one. All the hardware here would have to be upgraded, as well as buying
> new OS licenses, which have a pretty steep price. While Microsoft gives
> educational discounts, the price is still pretty high, and our faculty
> doesn't have the resources for such a drastic upgrade. Besides, I don't
> want to just replace 98 with XP; I want a real change. I want thin
> clients.
>
> I have long believed in the thin client-server model. Considering that
> nearly 99% of the students here in the lab use the computer for web
> surfing, email, office and other simple things, I don't understand why
> every user should have a dedicated processor, video card, hard disk and
> so on. Thin clients should provide more than enough for web surfing and
> emailing.
>
> Thin clients are, well, thin.
> They require less space, which means you can fit more of them into the
> same area than desktop PCs; in a computer lab with limited space, more
> people would be able to use computers.
>
> Thin clients use less power. Some research suggests they use nearly 1/6
> of the electricity that a normal desktop PC uses. Electricity is
> expensive here; if we can cut our consumption to 1/6, it will be a
> major win.
>
> Thin clients are easy to administer. You don't have to administer every
> single box; just maintain the central server and everything will be
> fine. This means a great reduction in the administration work.
>
> Besides, they are cheap. Cheaper than desktop PCs. The client doesn't
> need a video card (does it?), hard disk, etc.
>
> Users will only have access to what they should have, so they can't
> break the system.
>
> And there are many other advantages which you guys all know. Besides, I
> also eye the opportunity of suddenly presenting the Linux desktop to
> the faculty and replacing Windows desktops with Linux. Who knows, they
> might like it and stick with it (Linux is free, you know :-), they like
> that here!!).
>
> I have talked with the head of the faculty, and he told me to prototype
> my ideas, write a draft on thin clients, and explain exactly what I
> want to do and how I plan to accomplish it. If I can convince him, he
> is willing to give me a budget and let me lead a pilot project using
> thin clients in the computer labs. The problem is, when it comes to
> implementation, my knowledge is near zero.
>
> How should I build thin clients? Can I build them myself, just like
> white-box PCs? Or should I buy them from manufacturers? (The second
> case would add to costs.)
>
> What kind of a server do I need? Say, for servicing 20 clients, how
> powerful should my server be? Do you think a 4-way CPU with, say, 4 GB
> RAM (the maximum available on the x86 architecture) is enough for a
> 20-client lab?
> What about disk storage?
>
> What software should I run? Should I run the X server, host it on the
> lab server, and then connect the clients to it? Or are there other
> special solutions for using thin clients? (X sounds fine for *nix
> systems, but if we are to use Windows, then what?) Note that I don't
> want to pay for software, so I would rather have a fully open-source
> solution.
>
> Any white papers, advice, practical experience, or anything else
> regarding thin clients is greatly welcomed. Also, if you point me to a
> white paper, analysis, or research which explains the advantages of
> thin clients, it would help me in writing my prototype.
>
> Arash Bijanzadeh and Zeini: I know that Cahapar Shabdiz works closely
> with that Swedish company which is experienced in thin clients. I would
> love it if you guys gave me any information you have regarding thin
> clients. Have you ever deployed them yourselves at production level?
> What about performance and speed? Is performance reliable enough for
> these simple day-to-day tasks? Also, if you give me information about
> that Swedish company, I might use it. Who knows, we might end up buying
> our systems from them, if they have good prices and give educational
> discounts :-)
>
> Basically, any information regarding thin clients is welcome. Let me
> save my university!!!
>
> Cheers
>
> --
> /* "Every gun that is made, every warship launched,
> every rocket fired, signifies in the final sense a
> theft from those who hunger and are not fed, those
> who are cold and are not clothed."*/
> --President Eisenhower
>
> Aryan Ameri
>
>
> _______________________________________________
> bna-linuxiran mailing list
> [EMAIL PROTECTED]
> http://mail.nongnu.org/mailman/listinfo/bna-linuxiran
