Client/server computing has evolved from concept to reality at a surprising number of companies, mainly because of the convergence of three key technologies.
LAN servers, database-management systems (DBMSs) and high-speed networks — the client/server building blocks — have matured enough in the past year to give companies the confidence they need to erect new computing architectures, according to industry analysts.
These technologies have converged so quickly that even some analysts have been surprised by the client/server wave. In a recent study, Cambridge, Mass., market researcher Forrester Research Inc. found that 69 percent of Fortune 1000 companies now use server databases, compared with only 26 percent a year ago (see chart and related story, below).
It’s as if corporations are rushing to embrace the technologies that will at last free them from costly, proprietary computing structures. In the client/server world, the only proprietary notion is that of rightsizing — all of the enabling technologies fitting together in a scalable, interoperable platform that allows each company to manage growth at its own pace.
Before jumping headlong into the client/server market, however, buyers should note the continuing evolution of enabling technologies. LAN servers are relatively stable products, but database-management systems and high-speed networking technologies are expected to change dramatically in the coming year.
After years of marginal growth, LAN servers have become “a lucrative market,” said Susan Frankle, senior analyst at market researcher International Data Corp. in Framingham, Mass. LAN server sales will almost double in the next few years, from 700,000 in 1992 to 1.2 million in 1996, she said.
A couple of forces are driving this trend, according to Frankle. First, companies are putting more PCs on networks and may need additional servers to increase performance. Organizations are also putting more network applications on-line. Rather than installing E-mail, fax and database applications on a single server, companies are installing dedicated servers for these types of applications.
Heady competition reigns within the LAN server market, which comprises X86- and RISC-based machines. Makers of multiprocessing super servers (Tricord Systems Inc., Parallan Computer Inc. and NetFrame Systems Inc., for example) are battling manufacturers such as Dell Computer Corp. and Compaq Computer Corp., which are trying to boost single- or dual-processor machines with network-management software, redundant drive arrays and overall system fault tolerance.
Compaq and Dell are doing more than putting PCs on steroids, Frankle said. “They’re trying to create bulletproof machines. As the machine becomes more powerful, it serves critical applications. You can’t have it go down on you,” she added.
As the Forrester report indicates, 1992 was the year that database-management systems caught on. However, most client/server database applications are still in the pilot stage, leaving many information systems managers reluctant to trust the new paradigm with mission-critical applications.
Part of the problem is the lack of administration tools available for client/server database systems (see related story, Page 92). While Oracle Corp., Sybase Inc. and other DBMS vendors work on adding tools similar to those available on mainframes, many corporate customers are relying on a two-pronged strategy of running mission-critical applications on mainframes and decision-support applications on client/server databases.
In addition to the lack of administration tools, database vendors have yet to perfect true distributed-database computing, in which copies of a database spread across a LAN or WAN are automatically kept in sync with the latest changes. For now, administrators are skirting the problem by adding redundancy to LAN servers in the form of Redundant Arrays of Inexpensive Disks (RAID) devices or uninterruptible power supplies.
Most analysts say the tools now lacking in client/server database systems will be added in the next couple of years, and that the Holy Grail of distributed databases will be attained by 1995.
Systems administrators in the throes of rightsizing must keep a constant watch on technology just over the horizon. Nowhere is that watch more promising, or more confusing, than in high-speed networking.
Clearly, the momentum in the market is toward 100M-bps networking. The question is, how can corporations tap the power of emerging standards such as Fiber Distributed Data Interface (FDDI) without sacrificing their existing installations? One answer is CDDI, for Copper Distributed Data Interface — an alternative to fiber-optic networking that allows organizations to keep the copper wire they have installed in their buildings.
Even more promising is Fast Ethernet, a proposal for boosting Ethernet speeds from the current 10M bps to 100M bps. At the moment, however, Fast Ethernet proponents are split into two camps: one calls for extending Ethernet so that corporations can switch between 10M-bps and 100M-bps speeds, and the other favors replacing the underlying Ethernet access method to achieve the higher speed.
Many of the issues surrounding 100M-bps networking should be worked out within the year. Meanwhile, the existing state of the art — 10M-bps Ethernet and 16M-bps Token-Ring networking — is more than adequate for pilot client/server applications.