This movement toward centralized systems provides a potential killer application for Linux. As the only operating system that works well both on high-volume servers and on thin clients, Linux is positioned to make inroads in the business and educational markets where centralized systems are increasingly common.
First, some definitions. A centralized system is one in which a single server provides most or all of the disk space, memory, and processor time its users consume. A client-server system, by contrast, relies primarily on the computing power of the users' own machines, possibly storing data in a central location. Of course, there is no firm boundary: today's clients are far more powerful than the servers of thirty years ago.
Several vendors have attempted revivals of the central-server model. The most famous attempt is Sun's JavaStation, a widely advertised device that could download and run small, custom applications written in Java. Almost immediately, Microsoft retaliated with its NetPC standard, which mostly specified minor changes to commodity PC designs to make them more terminal-like. Both attempts failed. The JavaStation required a total rewrite of every application in Java, and it didn't work well with the Web, which was just becoming a business tool at the time. The NetPC simply wasn't appreciably different from more widely available PCs.
Both the JavaStation and the NetPC downloaded each application from the server every time it was used. When the arrival of 100 Mbps networks increased the speed of client-server communication by an order of magnitude, Sun and Microsoft realized that terminals could now transfer images of the user's desktop over the network in real time. This let administrators install applications, unmodified, on the server alone, without worrying about network communication among the clients. Microsoft licensed multiuser technology from Citrix to build Windows Terminal Server, letting inexpensive terminals share a single server in parallel, while Sun created its Sun Ray terminals, which receive their graphics from a Sun server.
The idea of transferring graphics commands over the network had been tried once before -- with X terminals. X terminals receive graphics commands over the network, and can even connect to multiple servers at once, but the X protocol is uncompressed, and can't keep up with today's graphical applications.
Linux, as a multiuser system, fits naturally into a centralized infrastructure. Nearly every Linux distribution ships with XFree86, a very good X implementation, so setting up a client to run programs from a server requires only a few changes to the X configuration. On the server side, a few lines added to the XDM configuration allow terminals to connect over the network.
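As a rough sketch of what those few lines look like (file locations vary by distribution; these are typical XFree86 paths, and the hostname is a placeholder):

```shell
# On the server: enable XDMCP so remote terminals can request sessions.
# In /etc/X11/xdm/xdm-config, comment out the line that disables it:
#   ! DisplayManager.requestPort: 0
#
# Then, in /etc/X11/xdm/Xaccess, permit hosts on the lab subnet
# to receive a login window:
#   192.168.1.*

# On each terminal, start X as an XDMCP query client
# instead of running a local session:
X -query server.example.com
```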
However, X alone is not a satisfactory solution. Running a modern desktop environment like KDE will bog down a fast network with only a few users. Though X traffic can be compressed through a local proxy, setting such a system up can be a challenge. X servers are also very large -- up to 80MB -- which doesn't seem appropriate for a supposedly thin client. And, for those concerned with security, X sessions are quite vulnerable to simple sniffing.
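One partial workaround for both the bandwidth and the sniffing problems is to tunnel X through ssh, which can compress and encrypt the stream in one step. A minimal sketch, assuming OpenSSH on both ends (the hostname is a placeholder):

```shell
# -X enables X11 forwarding; -C compresses everything in the tunnel.
ssh -X -C user@appserver.example.com

# Once logged in, ssh sets DISPLAY automatically, so any X client
# started in this shell renders on the terminal in front of you:
netscape &
```

This doesn't shrink the X server itself, but it keeps session contents off the wire in the clear.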
There are other ways to use Linux in a centralized system. The VNC system from AT&T Laboratories Cambridge displays X programs using a very small client. The client runs on nearly any OS, which allows easy integration with existing Windows systems: to run a program on a Linux server from a Windows machine, one simply opens the VNC client, turning the PC into a terminal. VNC is a compressed protocol, and the required bandwidth decreases with every release. Its disadvantages are that it is even less secure than X -- a program exists that can pull VNC sessions off the network and "replay" them pixel for pixel -- and that the server needs a framebuffer for each client connection, which can increase memory requirements considerably.
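A typical VNC session, sketched under the assumption that the standard vncserver and vncviewer tools are installed (the hostname is a placeholder):

```shell
# On the Linux server: start a virtual X display for this user.
# vncserver prints the display number, e.g. "desktop is server:1".
vncserver -geometry 1024x768 -depth 16

# On any client machine -- Windows, Linux, or otherwise --
# connect to that display:
vncviewer server.example.com:1

# When finished, shut the virtual desktop down on the server:
vncserver -kill :1
```

Because the virtual desktop keeps running between connections, a user can disconnect at one terminal and pick up the same session at another.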
Ultimately, the choice of terminal system comes down to cost. The cost of maintenance, the cost of hardware, the cost of setup and, for the Microsoft and Sun systems, the cost of software must all be factored into the total cost of a centralized computing system.
My school recently paid $800 a seat for a cluster of Compaq iPaqs running Windows NT. This is a good example of where centralized systems could save thousands of dollars. A lab full of fifty of these systems would cost $40,000, a useful baseline against which to compare the terminal-based systems. (My school actually paid $65,000 for thirty machines, because they bought very expensive monitors for each station and decided to purchase a $15,000 server to store a few hundred MB of student files. For the sake of argument, we're assuming there are monitors to spare and that the individuals designing the lab don't have $30 million in taxpayers' money they need to get rid of.)
The computers in this lab need to handle the four tasks discussed below: browsing the Web, accessing email, running productivity applications, and reaching any custom databases.
Our hypothetical lab is owned by an educational institution such as a large high school or a small college, so educational discounts are included below. They usually don't make very much difference in the total prices; in fact, the cheapest solution is made by a company with no educational discount at all.
There are many ways to construct such a lab in a centralized manner. Here are the prices for a few, not including monitors and incidentals:
All the terminals in the world are useless without the right software. Most computers in business and education need to do four things: browse the Web, access email, run productivity applications, and access any custom databases that might exist. Browsing the Web and checking email are tasks any OS can handle; on a server OS, it's possible to give each user in the lab an email account and a proxy server to speed up Web browsing. Productivity applications do exist on Linux (I'm writing this using WordPerfect over a VNC link, and KDE's KWord is making great strides), but this is one area where Windows machines are still far superior. (For the Sun Ray, the Open Source word processors and StarOffice are available, but commercial office suites like Corel's are not.) Custom databases, of course, will need to be rewritten to work under a different OS, but usually both Windows and Unix versions already exist in some form. Many companies run corporate databases through X applications that display on the user's PC, so a Linux system will fit in naturally.
Usability certainly varies between the different terminal systems. Windows Terminal Server has very good compression -- after all, Microsoft can fine-tune it for their graphics API -- and the user has access to most Microsoft applications. VNC isn't as fast, but the next version is rumored to incorporate new algorithms to speed it up.
Running applications locally on each NIC is an attractive possibility because it is so cheap, but the overhead of creating a custom boot CD, burning 50 copies of it, and building the distribution on the server may justify the $10,000 expense of a more server-side solution. The Windows solution is, typically, extremely simple to set up, though creating and managing 1,000 accounts is certainly easier on Linux. VNC is not trivial to set up -- it often involves patching the X server, setting up XDM, and reconfiguring inetd -- but an experienced sysadmin should be able to do it in a day or two.
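The inetd piece of that VNC setup might look roughly like this; the service name, port, and Xvnc path are illustrative and depend on how VNC was packaged and patched:

```shell
# /etc/services -- give the VNC display port a service name:
#   vnc-1024x768    5950/tcp

# /etc/inetd.conf -- spawn a fresh VNC desktop for each connection,
# pointed at the local XDM so the user sees a login prompt:
#   vnc-1024x768 stream tcp nowait nobody /usr/bin/Xvnc Xvnc \
#       -inetd -query localhost -geometry 1024x768 -depth 16

# Tell inetd to reread its configuration:
kill -HUP `cat /var/run/inetd.pid`
```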
For some purposes, using terminals is a major advantage. Updating applications is simple, and with VNC or a Sun Ray, users can save their desktops and use them at a different terminal. Sharing files and checking email inboxes suddenly becomes simple when everyone uses the same server. With VNC, a teacher or supervisor could monitor a user and take control of a session; X allows users to run programs on each other's displays.
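The X mechanism behind that last trick is simply the DISPLAY setting, sketched here with hypothetical hostnames; the receiving terminal must first grant access:

```shell
# On the student's terminal: allow the teacher's machine to connect.
# (xhost is crude and insecure -- fine for a quick demo only.)
xhost +teacher-station.example.com

# On the teacher's machine: pop a program up on the student's screen.
DISPLAY=student12.example.com:0 xclock &
```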
Maintenance on centralized systems is mostly a matter of updating the server. The Linux, Windows, and Solaris servers should really only be modified by experts in those systems; after all, a small error could make 50 machines stop working. That's why a Windows shop shouldn't consider a Linux-based network, or vice versa, unless it is willing to hire new system operators and invest in training.
All the systems examined would save our hypothetical lab at least several thousand dollars. Though the Linux-based solutions are not perfect, they are somewhat cheaper than the Microsoft and Sun systems, and would probably run acceptably in a commercial environment. It is also worth noting that none of these solutions is complete -- in the real world, each station would need a monitor, hubs and switches would need to be purchased, and a backup system would need to be put in place.
Are centralized systems really worth the effort of choosing and configuring one? No one would recommend a centralized system for a home network of three computers, for a team of programmers who constantly push the limits of the operating system, for a LAN party of Quake players, or for running an air-traffic-control system. But where there are many similar workstations running similar, non-graphics-intensive programs, and absolute reliability isn't necessary, centralized systems can greatly reduce cost and sometimes even increase functionality. It's time for terminals to have their place in the spotlight again.
Dan Feldman is a high school student in Seattle. He likes to write programs in Python, find recipes for dirt-cheap computers, and experiment with cool Open Source stuff. He can be reached at firstname.lastname@example.org.