Key Technologies Help Sys Admins

Client/server computing has evolved from concept to reality at a surprising number of companies, mainly because of the convergence of three key technologies.

LAN servers, database-management systems (DBMSs) and high-speed networks — the client/server building blocks — have matured enough in the past year to give companies the confidence they need to erect new computing architectures, according to industry analysts.

These technologies have converged so quickly that even some analysts have been surprised by the client/server wave. In a recent study, Cambridge, Mass., market researcher Forrester Research Inc. found that 69 percent of Fortune 1000 companies now use server databases, compared with only 26 percent a year ago (see chart and related story, below).

It’s as if corporations are rushing to embrace the technologies that will at last free them from costly, proprietary computing structures. In the client/server world, the only proprietary notion is that of rightsizing — all of the enabling technologies fitting together in a scalable, interoperable platform that allows each company to manage growth at its own pace.

Before jumping headlong into the client/server market, however, buyers should note the continuing evolution of enabling technologies. LAN servers are relatively stable products, but database-management systems and high-speed networking technologies are expected to change dramatically in the coming year.

LAN servers

After years of marginal growth, LAN servers have become “a lucrative market,” said Susan Frankle, senior analyst at market researcher International Data Corp. in Framingham, Mass. LAN server sales will almost double in the next few years, from 700,000 in 1992 to 1.2 million in 1996, she said.

A couple of forces are driving this trend, according to Frankle. First, companies are putting more PCs on networks and may need additional servers to increase performance. Organizations are also putting more network applications on-line. Rather than installing E-mail, fax and database applications on a single server, companies are installing dedicated servers for these types of applications.

Heady competition reigns within the LAN server market, which comprises X86- and RISC-based machines. Makers of multiprocessing super servers (Tricord Systems Inc., Parallan Computer Inc. and NetFrame Systems Inc., for example) are battling manufacturers such as Dell Computer Corp. and Compaq Computer Corp., which are trying to boost single- or dual-processor machines with network-management software, redundant drive arrays and overall system fault tolerance.

Compaq and Dell are doing more than putting PCs on steroids, Frankle said. “They’re trying to create bulletproof machines. As the machine becomes more powerful, it serves critical applications. You can’t have it go down on you,” she added.


Database-management systems

As the Forrester report indicates, 1992 was the year that database-management systems caught on. However, most client/server database applications are still in the pilot stage, causing some reluctance among managers of information systems to rely on the new paradigm for mission-critical applications.

Part of the problem is the lack of administration tools available for client/server database systems (see related story, Page 92). While Oracle Corp., Sybase Inc. and other DBMS vendors work on adding tools similar to those available on mainframes, many corporate customers are relying on a two-pronged strategy of running mission-critical applications on mainframes and decision-support applications on client/server databases.

In addition to the lack of administration tools, database vendors have yet to perfect true distributed-database computing, in which databases on a LAN or WAN are constantly updated to reflect the latest changes. For now, administrators are skirting the problem by adding redundancy to LAN servers in the form of redundant arrays of inexpensive disks (RAID) or uninterruptible power supplies.

Most analysts say the tools now lacking in client/server database systems will be added in the next couple of years, and the Holy Grail of distributed databases will be attained by 1995.

High-speed networks

Systems administrators in the throes of rightsizing must constantly be aware of technology beyond the horizon. Nowhere is this watch more promising — and confusing — than in high-speed networking.

Clearly, the momentum in the market is toward 100M-bps networking. The question is, how can corporations tap the power of emerging standards such as Fiber Distributed Data Interface (FDDI) without sacrificing their existing installations? One answer is CDDI, for Copper Distributed Data Interface — an alternative to fiber-optic networking that allows organizations to keep the copper wire they have installed in their buildings.

Even more promising is Fast Ethernet — a proposal for increasing the speeds of Ethernet networks from the current 10M bps to 100M bps. At the moment, however, Fast Ethernet proponents are split into two camps. One calls for extending Ethernet so that corporations can switch between 10M-bps and 100M-bps speeds, and the other favors replacement of a basic Ethernet layer to achieve the higher speed.

Many of the issues surrounding 100M-bps networking should be worked out within the year. Meanwhile, the existing state of the art — 10M-bps Ethernet and 16M-bps Token-Ring networking — is more than adequate for pilot client/server applications.

Get Smart With Your Client/Server Apps

Everybody knows how to build micro-based database applications, right? Just install the database program on the PC, spend a few minutes designing the database and then start writing the application.

Not in the client/server world.

With applications that involve both client and server software, quite a bit of behind-the-scenes work is required before developers can write a line of code. Unless they recognize the need for this work and schedule time for it, companies will find client/server projects inexplicably slow at first.

The client/server world involves three distinct entities: the server machine, the client machines and the network that connects the two. The server and clients must be able to communicate over the network before they can run any useful software.

Basic network connectivity

The first step is to establish physical connectivity between the clients and the server. Like the entire prep process, this is best done on a few pilot client systems. Once those systems are up and working, administrators can copy the working setups to actual client machines while minimizing the disruption for people using those machines.

To make sure the clients and the server are on the same network segment, install the appropriate network adapter in each client machine and connect that machine to the network with the appropriate type of cable. This task is best left to the network administrator. In some organizations, the clients and server machine will already be running on the appropriate network.

The next step is to establish connectivity between the basic network-protocol stacks running on the client and server machines. On the client side, this usually involves loading the network-adapter drivers for the appropriate network protocols. (Those would be SPX/IPX for a NetWare network, TCP/IP for a Unix network and so on.) Having some network protocols running on the clients may not be enough, because the database server may require a different network-protocol stack. Some database servers, for example, are available on Unix systems but not as NetWare Loadable Modules; such servers often require TCP/IP protocol stacks on the clients.
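In a modern setting, the equivalent of this stack-level check can be sketched in a few lines: attempt a TCP connection to the server's listening port and report success or failure. The host name and port below are placeholders, and the snippet assumes a plain TCP/IP path rather than SPX/IPX.

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    A stand-in for verifying that the client's protocol stack can
    actually reach the database server's listener.
    """
    try:
        # create_connection handles name resolution and the connect.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts and unreachable hosts.
        return False
```

Running this from each pilot client before installing any database software separates network problems from database-driver problems later on.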

Consider, for example, an organization that wants to plug a Unix-based database server into an existing NetWare network. The client PCs are running SPX/IPX, but the database server needs TCP/IP. Simply changing the clients to use only TCP/IP is probably not an option, because the clients need SPX/IPX for their existing file and print services.

The answer is to install on the client PCs a mechanism that lets the SPX/IPX and TCP/IP stacks coexist. One of the best ways to do this is to install drivers that obey an interface standard designed to work with multiple protocol stacks. Two such standards are Microsoft Corp.’s Network Driver Interface Specification (NDIS) and Novell Inc.’s Open Data-Link Interface (ODI).

Both NDIS and ODI drivers work the same basic way. Each loads a low-level driver for the adapter, then loads a piece of protocol-independent software. With this software, the client PC can run multiple protocol stacks, each of which talks to the same single protocol-independent layer.
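The layering that NDIS and ODI provide can be illustrated with a toy dispatcher (not actual driver code): several protocol stacks register with one protocol-independent layer, which routes each incoming frame to whichever stack claimed its protocol identifier. All names here are invented for illustration.

```python
class ProtocolMux:
    """Toy protocol-independent layer shared by multiple stacks."""

    def __init__(self):
        self._stacks = {}  # protocol identifier -> handler function

    def register(self, proto_id, handler):
        # Each protocol stack claims one identifier on the adapter.
        self._stacks[proto_id] = handler

    def deliver(self, proto_id, frame):
        # Route an incoming frame to the stack that claimed it.
        handler = self._stacks.get(proto_id)
        if handler is None:
            return False  # no stack claimed this protocol; drop the frame
        handler(frame)
        return True

received = []
mux = ProtocolMux()
mux.register("IPX", lambda f: received.append(("IPX", f)))
mux.register("IP", lambda f: received.append(("IP", f)))

mux.deliver("IPX", b"netware file request")
mux.deliver("IP", b"tcp segment for the database server")
```

The point of the sketch is the single `deliver` entry point: one adapter driver feeds both stacks, which is what lets SPX/IPX and TCP/IP coexist on one card.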

The main drawback of this approach is RAM consumption. Administrators must load as much of this software as possible into high memory, or client PCs running DOS or Windows may run up against the conventional 640K-byte limit.

This approach can also be quite expensive, frequently in ways that are not initially visible. Switching to ODI drivers, for example, will work only if all the network adapters in the client PCs are models for which ODI drivers are available.

Basic server connectivity

With the proper network protocols installed and the client systems able to connect to the server, the next step is to link the front-end program and the server-database software.

Most database-server packages include their own network drivers for this purpose. Those drivers, which are installed on the client machines, format network data in such a way that both the database server and its compatible client front ends can handle it.

Oracle Corp.’s Oracle database server, for example, comes with a driver called SQL*Net, which provides a layer of abstraction between all Oracle client front ends and the underlying network protocols. On NetWare networks, SQL*Net uses Oracle’s SPX driver to link clients to an Oracle server.
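The abstraction-layer idea can be sketched as follows. The connect-string prefixes ("T:" and "X:") and class names below are invented for illustration and are not Oracle's actual SQL*Net syntax; the point is only that the front end names its destination once and the driver selects the matching transport.

```python
class TcpTransport:
    """Invented stand-in for a TCP/IP transport module."""
    name = "TCP/IP"

class SpxTransport:
    """Invented stand-in for an SPX transport module."""
    name = "SPX"

# Hypothetical mapping from connect-string prefix to transport.
TRANSPORTS = {"T": TcpTransport, "X": SpxTransport}

def pick_transport(connect_string: str):
    """Split a 'prefix:address' string and pick the transport."""
    prefix, sep, rest = connect_string.partition(":")
    transport_cls = TRANSPORTS.get(prefix)
    if transport_cls is None or not sep:
        raise ValueError("unrecognized connect string: " + connect_string)
    return transport_cls(), rest

transport, address = pick_transport("T:dbhost:ORCL")
```

The front end never touches the protocol stack directly; swapping networks means changing one prefix, not the application.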

These drivers often use one or more configuration files to control their operation, and each of those configuration files must be set up correctly. The drivers can easily consume 50K bytes or more of RAM, so they can aggravate the RAM-cram problem.

The result of this effort should be the client front ends successfully talking to the database server.

At this point on PC clients, however, RAM can be so tight that the front ends don’t have enough RAM to run. If that happens, it’s time to pause and go through all the usual steps to regain RAM: Consider using a memory manager, load as much software as possible into high memory and so on.

Putting all the pieces together

When the RAM problem is under control, the next step is to set up user accounts and permissions in the database server.

First, log in to the database server. Some of the preceding steps may require the special privileges of an administrator account, but don’t be tempted to keep using that account while setting up the front-end packages. Use an account with standard user privileges to test the links between the front ends and the database server. Unless the eventual users of the client/server application will have special privileges, developers should not use privileged accounts.

Some databases even come with a built-in user account for this kind of testing. Oracle, for example, includes a user ID SCOTT with a password of TIGER for running tutorials and verifying that everything works.

After the log-on process, all that remains is to verify that a standard user account can indeed read and write to the database. The easiest way to conduct this verification is to use one of the sample database tables that most database-server vendors include with their products. When the front ends can manipulate one or more of these sample tables, all the pieces are in place.
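That final read/write check can be sketched with Python's bundled sqlite3 standing in for a networked database server; the table name and sample row below are invented for illustration.

```python
import sqlite3

def verify_read_write(conn: sqlite3.Connection) -> bool:
    """Confirm an ordinary account can write and read a sample table."""
    cur = conn.cursor()
    # Stand-in for a vendor-supplied sample table.
    cur.execute("CREATE TABLE IF NOT EXISTS sample_emp (id INTEGER, name TEXT)")
    cur.execute("INSERT INTO sample_emp VALUES (?, ?)", (1, "SCOTT"))
    conn.commit()
    # Read the row back to prove the round trip works.
    rows = cur.execute("SELECT id, name FROM sample_emp").fetchall()
    return (1, "SCOTT") in rows

ok = verify_read_write(sqlite3.connect(":memory:"))
```

Against a real server the connection call changes, but the shape of the check (insert, commit, select, compare) stays the same.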



1. In many organizations, the client systems already will be connected to the same network as the database server system. If they are not, install the appropriate LAN adapter and cables on each client machine.

2. Install the network-protocol stack (for example, SPX/IPX, TCP/IP or NetBEUI) the client needs to communicate with the database server. Again, in some organizations those protocols already may be running on the clients.

3. Ensure network coexistence. This is necessary when the clients are running a network protocol, but not the one needed to talk to the database server. A coexistence standard such as NDIS or ODI will enable the client’s network adapter to run both its current protocol stack and the protocol stack required by the database server.

4. Install the software that communicates between the database front-end applications and the database server’s protocol stack. For Oracle servers on a NetWare network, for example, the SQL*Net drivers for SPX/IPX are necessary.

5. Check for sufficient RAM. On PC clients, adding all these pieces of software — network protocol, network coexistence and database connectivity programs — can cause RAM shortages. To have enough RAM to run the database front-end application, the installer may need to regain RAM by taking such steps as loading software high and using a memory manager.

6. Establish a database user account. Use a standard user account, not a privileged administrator account, for all testing. Either use a default user account that the database server includes or create a user account for development and testing.

7. Log in to the database with the selected user account. Verify that this account and the other developer accounts can work with sample database tables.

8. Get to the “real” work: Start designing the database, laying out the prototype screens and building the client/server application.

Middleware Makes Good On Its Promises

Imagine seamlessly accessing data from IBM AS/400, HP 9000 and PC servers, regardless of the type of client platform. Better still, imagine developing an application that runs on multiple clients and integrates with these servers without prior knowledge of what those platforms are.

Creating such a truly distributed computing environment is the goal of a growing class of software called “middleware,” which provides a common interface for diverse front ends and servers. With so many approaches to client/server computing and a diverse installed base, the demand for middleware is increasing. By definition, client/server applications demand the seamless integration of software across computer platforms, operating systems and networks.

One new middleware product slated to ship next month is Oracle Corp.’s Oracle Glue for Windows, which provides access from Microsoft Corp.’s Visual Basic and Excel, Dynamic Data Exchange-enabled applications and programs that support dynamic link libraries to Oracle and IBM DB2 servers, Borland International Inc.’s dBASE and Paradox files, and Sharp Electronics Corp.’s Wizard 7000 and 8000 series electronic organizers.

“There’s an overwhelming need for middleware,” said Tucker McDonagh, managing director of Tucker Network Technologies Inc. in South Norwalk, Conn. Standards emerging in the market, such as the Vendor-Independent Messaging API (application programming interface), will “relieve some of the complexity [of disparate environments], but middleware will always be a requirement,” he said.

Middleware provides an API for exchanging messages between pieces of distributed applications, McDonagh said. Some programs also can work with multiple protocol stacks, so separate applications can interoperate across dissimilar LANs and WANs.
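The messaging style of middleware McDonagh describes can be sketched as a small API that hides the transport from application code. The class and method names below are invented; a real product would put a LAN or WAN protocol stack behind the same calls, but here an in-process queue stands in.

```python
import queue

class MessageBus:
    """Toy messaging layer: applications see send/receive, not the transport."""

    def __init__(self):
        self._queues = {}  # destination name -> pending messages

    def send(self, destination: str, message: bytes) -> None:
        # In a real product this would traverse a protocol stack.
        self._queues.setdefault(destination, queue.Queue()).put(message)

    def receive(self, destination: str) -> bytes:
        # Raises queue.Empty if nothing is waiting for this destination.
        return self._queues.setdefault(destination, queue.Queue()).get_nowait()

bus = MessageBus()
bus.send("inventory-server", b"QUERY part=1234")
request = bus.receive("inventory-server")
```

Because the application only ever calls `send` and `receive`, the transport underneath can change without touching application code, which is the property middleware sells.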

McDonagh divides middleware into five groups: messaging products with an API and communications services for distributed applications; remote procedure calls, with an API that works over multiple protocol stacks; database/data-access packages, which link to various databases; on-line transaction-processing programs, which add error-control, security and recovery aspects; and traditional middleware, or file-transfer software.

Developers of client/server software are providing links to multiple clients and back ends. Trifox Inc.’s Vortex, for example, includes software that allows users to access a variety of database servers, such as Ingres, Sybase, Informix, Oracle and Ultrix/SQL.

Oracle also offers a software suite that eases access to multiple clients and back ends. With Oracle Card, a front-end development tool, both PCs and Apple Computer Inc. PowerBooks can access an Oracle database, according to Frank Naeymi-Rad, MIS director for the University of Health Science at the Chicago Medical School.

“Oracle Card provides us with one platform for running both PCs and Macintosh computers,” he said. The application allows medical students and doctors at Cook County Hospital to use PowerBook notebook computers to enter data while at a patient’s bedside and transfer it over a network that is also accessed by DOS PCs.

In other cases, middleware serves as the catalyst for creating a distributed network. The state of Alabama, for example, is developing a system that uses Information Builders Inc.’s (IBI’s) EDA/SQL to link OS/2-based clients at 43 prospective counties with an IBM DB2 mainframe server.

“We run multiuser EDA/SQL at the gateway to ship requests to the host for updates to the DB2 server,” said David Murrell, information-systems manager for the state’s advanced technology group in Montgomery. “We really use EDA/SQL to accomplish a distributed-database environment.”

EDA/SQL stands out among database tools because it can be used with more than 100 front-end tools to access more than 50 types of databases on 35 different platforms.

While most middleware currently supports specific protocol stacks and applications, industry giants intend to establish all-encompassing standards. Digital Equipment Corp. (DEC), for example, engineered Network Application Support (NAS), an architecture and set of products designed to deliver the highest degree of open-systems interoperability.

DEC has more than 90 NAS products for nine hardware platforms. Examples include its Compound Document Architecture for documents with graphics and text and Application Control Architecture Services, which provide object-oriented tools for communicating between applications.

Third-party developers have also embraced the NAS architecture. More than 3,000 NAS applications are available from such firms as Computer Associates International Inc., IBI and Microsoft.

Digital Communications Associates Inc. (DCA) also plans to unveil its own platform this spring. The firm recently announced a universal communications architecture to provide consistent access, features and APIs. DCA’s first product based on this architecture will be QuickApp, an application-development tool that shields programmers from communications-transport protocols, including LU 2, LU 6.2 and LU 0.

Users of tools such as Visual Basic and Mozart Systems Corp.’s Application Renovation development software will be able to create end-user applications without knowledge of communications software, according to DCA officials in Alpharetta, Ga.