File Servers With Performance Monitors Last Longer

Most people tend to avoid planning for rough times. The thinking seems to be that if you don’t plan for it, maybe it won’t happen.

With servers, this kind of thinking will always lead to trouble because of computing rule No. 1: All computers eventually will seem slow. Servers are no exception. The server that today has more CPU power and disk speed than you can use will be as slow as an old dog tomorrow.

The good news is that with the right tools, you can prolong any server’s life. The first step, of course, is to find out just what the server is doing and exactly where it is too slow.

For that, you’ll need a good performance monitor. In fact, you’ll probably need a whole set of performance monitors, because in the client/server world, the “server” most users see is actually a combination of hardware and both operating system and application software.

The bad news is that we need a much broader and more powerful set of performance monitors than are currently available. What’s out there today only scratches the surface of the total information picture we’d like to see.

The lowest levels of a server, the baseline hardware performance, are in many ways the easiest to check. The main data you need is fairly straightforward: CPU, RAM and disk consumption, network-interface card activity level, and so on.

Not as easy as it seems

Even that data, however, can be tough to get in a complex server loaded with multiple processors, multiple disk controllers and lots of disk drives. In such a system, for example, it’s not enough to know that your disk subsystem is the bottleneck. You also need to know which controllers and/or drives are the problem.

Stepping up a level, you also need to know just how well the operating system is performing. Relevant information ranges from simple things, such as the number of users currently active, to more complex data, such as the percentage of the disk buffers that stay dirty over time.

In the world of PC-based servers, neither of these types of performance monitoring is where we’d like it to be, but vendors at least seem to be working on the issues.

The big hole in PC-based server-performance monitoring is in the server applications. When your E-mail server is slow, what’s happening? Just what is that database server doing, anyway? Worse, what interaction between those two is causing both to crawl?

Server applications need their own performance monitors, which should let you move from high-level overviews right down to the hardware. You should be able, for example, to start with a generic problem: slow query response time. With the database server’s performance monitor, you discover that 90 percent of your queries are going against one table.

Look deeper. Three-quarters of that 90 percent are using the same index. Look even deeper. That index and the table into which the index leads the queries are on the same (slow) disk drive.

Move the index and the table to separate, faster drives, and your performance will improve.

To make even such relatively simple performance examinations possible, the server application monitors must know how to work with the underlying operating system and hardware monitors. Each level of performance monitor should provide a consistent interface to all the higher levels that want to use it. Without such interfaces, you’ll end up dashing from monitor to monitor, trying to relate the problems displayed on one screen to the bottlenecks shown on another.
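As an illustration only, here is a minimal Python sketch of what such a consistent interface might look like, with one class per monitoring level and a single drill-down routine that works at every level. The class names, metrics and figures are hypothetical, not drawn from any shipping monitor.

```python
# Hypothetical sketch: a uniform interface that hardware, OS and
# application monitors could all expose, so a higher-level monitor
# can drill down without caring what sits underneath.

class Monitor:
    """Every level answers the same two questions."""
    def metrics(self):
        """Return a dict of metric name -> current value."""
        raise NotImplementedError
    def children(self):
        """Return the lower-level monitors this one depends on."""
        return []

class DiskMonitor(Monitor):
    def __init__(self, name, busy_pct):
        self.name, self.busy_pct = name, busy_pct
    def metrics(self):
        return {f"disk.{self.name}.busy_pct": self.busy_pct}

class DatabaseMonitor(Monitor):
    def __init__(self, queries_per_table, disks):
        self.queries_per_table = queries_per_table
        self.disks = disks
    def metrics(self):
        total = sum(self.queries_per_table.values())
        return {f"db.table.{t}.query_share": n / total
                for t, n in self.queries_per_table.items()}
    def children(self):
        return self.disks

def drill_down(monitor, depth=0):
    """Walk from the application level down to the hardware level."""
    for name, value in monitor.metrics().items():
        print("  " * depth + f"{name} = {value:.2f}")
    for child in monitor.children():
        drill_down(child, depth + 1)

if __name__ == "__main__":
    slow_disk = DiskMonitor("d0", busy_pct=97.0)
    db = DatabaseMonitor({"orders": 900, "customers": 100}, [slow_disk])
    drill_down(db)
```

Run against the made-up figures above, the drill-down shows 90 percent of queries hitting one table that sits on a nearly saturated drive, which is exactly the kind of answer a layered set of monitors should produce.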

The basic technology to provide this level of integration isn’t very hard, but it does require that a lot of different vendors work together and that standards emerge for all the key platforms. Those vendors that put in the effort to create such standards and tools that use them eventually will be rewarded with sales for their willingness to plan for the inevitable rough times.

IBM Redefined Mainframes In Order To Survive

IBM is bravely trying to give the mainframe a new lease on life as a “server of servers” in client/server environments.

To woo skeptical customers, IBM is planning less expensive versions of its 390 mainframe hardware and more open versions of its mainframe operating systems, the better to compete with the PC LANs and Unix super servers that have stolen market share from mainframes.

“[The mainframe] has the I/O, the memory, the bandwidth and storage to be a server of servers,” Nicholas Donofrio, general manager of IBM’s Enterprise Systems unit in Somers, N.Y., said at last month’s announcement of 18 new ES/9000 models.

The company’s strategy is three-tiered. First, IBM will continue to enhance its current crop of mainframes. The new models announced in February will appeal mostly to current customers running legacy applications.

Second, the company will develop parallel-processing mainframes built around air-cooled microprocessor versions of its current 390 processors. These models are designed to match the price/performance of PC LANs and other smaller platforms by the middle of the decade.

The first new parallel-based systems won’t appear until later this year, in the form of a back end to existing mainframes that will speed queries to IBM’s DB2 mainframe database, said Donofrio.

These systems will be stepping-stones to fully parallel systems that will appear in the second half of the decade. Those systems, Donofrio said, will be cost-competitive with smaller platforms equipped to deliver the same levels and types of computing power.

On a third front, Enterprise Systems has formed an internal joint business with IBM’s Advanced Workstations and Systems division to develop mainframes based on the RISC chips used in IBM’s RS/6000 workstations running Unix.

These systems will bring the performance of RISC hardware and the openness of the Unix operating system to mainframe customers. But Donofrio downplayed the idea that the current 390 architecture would be replaced by RISC/Unix systems anytime soon.

“It’s very unlikely to become the dominant player in this century,” he said. “Our MVS open client/server model will be the winner as we turn the century, [although] in a more parallel form.”

IBM’s ace in the hole for keeping the current mainframe architecture alive is the hundreds of billions of dollars’ worth of software already written for it by major corporations. Until there’s a feasible way to port such legacy applications from the mainframe to smaller platforms, and until information-systems managers are convinced those platforms are safe, IBM officials and large customers believe the mainframe will continue to have a role.

“We are moving in the direction of client/server for our legacy applications, but … the mainframe is not going to disappear for two or three years or longer,” said Harry Waldron, management information support manager at the Atlantic Mutual Insurance Cos. in Roanoke, Va.

“We have some packaged systems which are currently mainframe-oriented, and it will take time for vendors to retrofit those to scalable architectures,” he said. “We still have the accounting and actuarial systems, which require massive amounts of data. Those may not be scalable to client/server for years to come.”

IBM’s challenge is to keep such customers in the mainframe fold while it brings the price of mainframe computing closer to that of PC LANs, minicomputers and Unix servers.

Enterprise Systems must accomplish this even as revenues shrink. Battered by price wars and defections to other platforms, IBM’s mainframe revenue fell 8 percent in 1992 to $12.7 billion, officials said. Donofrio expects a similar revenue drop this year.

It will probably be 1995 or 1996 before Enterprise Systems’ revenues stabilize, said Bill Wilson, assistant general manager of marketing for the unit.

However, falling revenue and profit margins forced Enterprise Systems to shed 3,000 of its 16,500 jobs in 1992, said a spokesman for the unit.

Backs open-systems route

IBM is also promising to open up its mainframe environment, making it easier for other applications and other vendors’ hardware to work with its mainframes.

As part of its open-systems push, IBM last month announced that a POSIX shell and POSIX utilities will be available for its MVS/ESA mainframe operating system, but not until March 1994.

That support is only a “small stepping-stone” in letting mainframe applications run on Unix systems, said Lewis Brentano, group vice president of systems and applications for InfoCorp, a Santa Clara, Calif., market researcher. “I think you’re still looking at eight to 10 weeks on the MIS side to move this stuff over [to Unix],” he said.

By improving the price/performance ratio of its current mainframe hardware and software, “we’re going to give the customer every reason to stay on the 390 architecture and not spend the money to convert and migrate [their applications],” said Wilson.

But at the same time, IBM is porting some of its key mainframe software to smaller platforms for customers who choose them. One example is its CICS (Customer Information Control System) transaction-processing software, which is being ported to OS/2 and AIX, IBM’s version of Unix that runs on the RS/6000 workstation.

But such moves might only speed the move of applications from the mainframe to those smaller platforms, analysts said.

“They haven’t solved any of the inherent mainframe price/performance problems that have been driving users in increasing droves toward PC and midrange systems,” said Peter Kastner, a vice president at Aberdeen Group, a Boston market-research firm.

“The mainframe can be a large-scale data warehouse,” he said, “but IBM has to deliver the software … to make this happen, and it has to make it competitive with other building-block client/server solutions.”

SGI’s NT Focus Hurt It Long Term

Silicon Graphics Inc. is hoping to ride on the coattails of Windows NT to win its MIPS RISC architecture a prominent spot on the PC landscape.

According to President and CEO Edward McCracken, SGI’s strategy will be to encourage PC makers to adopt MIPS technology to distinguish their NT systems from those based on Intel Corp. processors. The proposed payoff: As NT’s market share grows, so will demand for MIPS-based systems, and with it SGI’s influence in the systems business, SGI officials said.

“If NT moves beyond a niche operating system, I believe the MIPS architecture will become the most pervasive RISC architecture in the world and may over the long term — in five or 10 years — move into the position of competing in volume with Intel,” McCracken said. SGI purchased MIPS, now called MIPS Technologies Inc. of Mountain View, Calif., last June; annual revenues for the combined firm are near $1 billion.

Hopes to triple sales

McCracken’s goal is for annual sales of MIPS’ RISC chips to grow from the current 300,000 to 1 million units by 1995. Although many of those chips will be sold in embedded systems, McCracken said he believes the bulk of the growth will come as a result of MIPS’ link with NT, due out this spring.

Analysts, however, are skeptical that SGI’s plan can achieve these lofty ambitions. Microsoft Corp. has pledged to make NT run on MIPS’ 64-bit microprocessors, which will allow software vendors to recompile Intel-based NT applications for the MIPS platform. However, SGI will have trouble convincing both hardware and software vendors to expend the effort, analysts said.

“It’s the classic chicken-and-egg situation,” said Jonathan Yarmis, vice president and service director for Gartner Group Inc., a market-research firm in Stamford, Conn. Hardware providers will want to wait until software vendors make a commitment to the MIPS platform, while software suppliers will opt to sit tight until the hardware vendors commit, he said. In contrast, Yarmis said Intel “doesn’t need to convince someone to be first.”

“Intel is a safe bet,” agreed Dave Becker, manager of systems product marketing for Wyse Technology, a San Jose, Calif., company building multiprocessing systems that will run NT.

More important than NT support, said Ken Lowe, an analyst at Dataquest Inc., a San Jose, Calif., market-research firm, is backing from powerful allies, which SGI needs for the MIPS chip to gain momentum. Lowe predicts that only 780,000 MIPS chips will ship in 1996, compared with 5 million PowerPC processors, which have the backing of IBM, Apple Computer Inc. and Motorola Inc. That same year, Intel will ship 45 million 386 and higher-class microprocessors, he projected.

MIPS is doing what it can to convince PC makers to come on board. Last month, with Microsoft’s show of support, MIPS officials announced a resource center, which will sell design kits to hardware vendors interested in the MIPS technology.

That tactic harkens back to a strategy laid out in the Advanced Computing Environment (ACE) initiative, a consortium formed in 1991 to develop standard workstations based on Intel and MIPS chips. When the initiative crumbled last year, in part under the weight of infighting, MIPS lost an important channel to PC makers.

“The creation of MIPS-based NT systems fulfills the promise of ACE,” said Carl Stork, director of systems marketing at Microsoft, of Redmond, Wash. “Our commitment and confidence in the MIPS architecture remains unflagging.” Stork, however, acknowledged that Microsoft is willing to work with any microprocessor maker interested in NT.

Digital Equipment Corp., another RISC competitor, is making similar claims for its Alpha processor, but pricing on the MIPS chip should be tempered by the fact that MIPS has six licensees churning out processor chips; DEC is the sole producer of Alpha.

So far, only one PC maker — Acer America Inc. of San Jose, Calif. — has committed to build a MIPS-based NT machine. This is not nearly enough, analysts said, because SGI needs to draw others to keep microprocessor prices down and, more importantly, fund future versions of the MIPS chip, which lies at the heart of its own workstation and supercomputer line.

“I don’t think the MIPS architecture has much of a chance in the systems market outside of SGI unless they are able to capitalize on NT,” said Bob Herwick, an analyst with the San Francisco brokerage firm Hambrecht & Quist.

According to McCracken, SGI currently uses only 30,000 MIPS microprocessors per year for its workstations and servers, or about 10 percent of the MIPS chips sold in 1992. Most go into embedded systems like laser printers, and others go to such systems makers as Pyramid Technology Corp. and Sony Corp.

Distributed Databases Require Hefty Planning

Distributed-database technology, while far from mature, continues to evolve in parallel with client/server computing.

The goal of distributed-database computing is to allow users to transparently read and update multiple databases running on different platforms in multiple distant sites. This ideal has not been achieved, according to observers, because as the number of different databases and hardware devices to be accommodated increases, it becomes harder to put together a stable and reliable system.

“You can get pretty good distributed-database performance for [data] reads; but when you are attempting to modify or write new data, generally most database products will not give 100 percent transparency,” said Paul Winsberg, a principal partner with Database Associates, a consulting firm in Berkeley, Calif.

This has not prevented a number of organizations from setting up distributed-database systems. But most of the successful distributed systems running today are homogeneous, with a single type of database communicating through a common set of network protocols.

All the major relational-database developers, such as IBM, Oracle Corp., Informix Software Inc., Gupta Corp., Ingres Corp. and Sybase Inc., are working on ways to improve the distributed-database capabilities of their systems.

Ingres, for example, has built a number of features into its Ingres/Star database to support distributed-database applications, including a distributed query optimizer, which, by analyzing CPU costs and network communication costs, allows database administrators to coordinate joins between tables in separate databases along the fastest, most cost-effective route, said Diana Parr, director of server marketing at the Alameda, Calif., firm.
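To make the optimizer’s arithmetic concrete, the sketch below compares two made-up plans for a distributed join, one that ships the small table to the big table’s site and one that does the reverse, and picks the cheaper. The cost formula and every number in it are invented for illustration; this is not Ingres/Star’s actual model.

```python
# Toy cost model for placing a two-table distributed join.
# All figures and the formula are illustrative only.

def plan_cost(rows_shipped, bytes_per_row, net_cost_per_mb,
              local_rows, cpu_cost_per_row):
    network = rows_shipped * bytes_per_row / 1_000_000 * net_cost_per_mb
    cpu = local_rows * cpu_cost_per_row
    return network + cpu

# Option A: ship the small table to the site holding the big one.
cost_a = plan_cost(rows_shipped=10_000, bytes_per_row=200,
                   net_cost_per_mb=5.0, local_rows=1_000_000,
                   cpu_cost_per_row=0.0001)

# Option B: ship the big table to the site holding the small one.
cost_b = plan_cost(rows_shipped=1_000_000, bytes_per_row=200,
                   net_cost_per_mb=5.0, local_rows=1_000_000,
                   cpu_cost_per_row=0.0001)

best = min(("ship small table", cost_a),
           ("ship big table", cost_b), key=lambda p: p[1])
print(f"cheapest plan: {best[0]} (estimated cost {best[1]:.1f})")
```

With these made-up numbers, shipping the small table wins easily; the point is simply that the optimizer weighs network traffic against local processing before it routes the join.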

The key to supporting distributed-database management and on-line transaction processing is the two-phase commit, according to David Knight, senior manager with Oracle Corp.’s Server Product Marketing group in Redwood Shores, Calif.

Under the two-phase-commit method, the database system backs out any transaction that it cannot fully execute, he said. This ensures that the database won’t be corrupted if a server crashes in the middle of a transaction and allows database administrators to restore all on-line databases up to the moment the crash occurred, before adding any new transactions, he said.

Most relational databases that support on-line transaction processing, including Oracle Server, Informix and Ingres, use the two-phase-commit process in one form or another.
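A bare-bones sketch of the two-phase-commit idea follows: a coordinator first asks every participating database to prepare and vote, then commits everywhere only if every vote is yes, and backs the work out everywhere otherwise. This is a simplification with in-memory stand-ins for the participants, and it omits the logging that real products such as Oracle Server rely on.

```python
# Simplified two-phase commit: phase one asks each participating
# database whether it can guarantee the transaction; phase two
# commits everywhere only if every participant said yes.

class Participant:
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy
    def prepare(self, txn):          # phase one: vote
        return self.healthy
    def commit(self, txn):           # phase two: make it permanent
        print(f"{self.name}: committed {txn}")
    def rollback(self, txn):
        print(f"{self.name}: rolled back {txn}")

def two_phase_commit(txn, participants):
    votes = [p.prepare(txn) for p in participants]
    if all(votes):
        for p in participants:
            p.commit(txn)
        return True
    for p in participants:           # any "no" vote aborts everywhere
        p.rollback(txn)
    return False

if __name__ == "__main__":
    ok = two_phase_commit("claim-1042",
                          [Participant("hq_db"),
                           Participant("field_db", healthy=False)])
    print("transaction committed" if ok else "transaction backed out")
```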

To provide read and write access to multiple vendors’ databases, developers usually turn to gateways and other “middleware,” which generally is provided at extra charge by these vendors or by third parties such as Micro Decisionware Inc. and Information Builders Inc. (IBI).

Relational databases compatible with ANSI-standard Structured Query Language (SQL) are working with a least common denominator of technology, according to Winsberg. This makes it difficult to update data across databases from multiple vendors, he said.

IBI markets Enterprise Data Access/SQL, a server-based query-processing system that allows users to retrieve data from more than 50 databases. However, EDA/SQL currently works mainly for read-only access to relational and nonrelational databases, Winsberg said.

IBM’s Distributed Relational Database Architecture “is a more robust solution because it does provide some update capability, but it is designed to work within a limited IBM environment,” he said. It gives PC users the ability to access data stored in DB2 mainframe databases, OS/2 servers or AS/400 databases.

Micro Decisionware’s DB2 Gateway is also widely used by developers who want to link PC-based query applications to DB2 mainframes. But again, these applications are mainly query and report-generation tools for decision-support purposes.

Even with these gateways, distributed-database applications that link multiple sites can be slow and difficult to set up, according to Richard Finkelstein, president of Performance Computing Inc., a database consulting firm in Chicago.

“Joining multiple relational tables on a single machine can be a very slow process,” he said. “Joining multiple tables over a distributed network, even when you are working with fast lines, can be impossibly slow.”

However, major corporations are building distributed-database applications by setting moderate goals for themselves and by planning carefully.

ITT Hartford Insurance Co.’s Employee Benefits Division uses Oracle’s Parallel Server on a DEC VAXcluster to give 25 field offices around the United States access to customer records for filing and reviewing benefits claims, said Jim Bosco, project manager at the Hartford, Conn., firm.

Each field office has its own VAX, which can be used to enter and process claims, Bosco said. All of the field offices are connected to the VAXcluster in the home office through a DECnet WAN, he said.

“Most of the data that we use is entered and maintained in the home office. However, most of the field offices have local databases that are part of our enterprise data model,” Bosco explained.

A field-office worker can enter a query and call up the data from the local database; if it is not there, the system will look for the case file in the home-office VAXcluster. “The performance is transparent. The user doesn’t know where the data is coming from,” Bosco said. The field offices transmit updates to the home-office databases each night, he said.

This setup is successful because it’s a pure Oracle application running on a fairly fast server, according to Bosco. It would be more difficult to provide the same level of performance if the system were working with two different databases, he added.
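The local-then-remote lookup Bosco describes can be sketched in a few lines of Python, with two dictionaries standing in for the field-office and home-office Oracle databases. The function and record names are hypothetical, chosen only to show the fallback pattern.

```python
# Hypothetical sketch of the field-office lookup: try the local
# database first, then the home office, with the caller never
# told which one answered.

local_db = {"claim-100": {"office": "Denver", "status": "open"}}
home_office_db = {"claim-100": {"office": "Denver", "status": "open"},
                  "claim-200": {"office": "Hartford", "status": "closed"}}

def find_claim(claim_id):
    record = local_db.get(claim_id)
    if record is None:
        record = home_office_db.get(claim_id)   # remote fallback
    return record

print(find_claim("claim-100"))   # satisfied locally
print(find_claim("claim-200"))   # satisfied by the home office
```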

Get The Right Tools For Smooth Apps

The client/server world is brand-new for companies planning to rightsize their database operations, and that shows in the slim availability of database-management tools to support such platforms.

Many of the tools available for mainframes to control and manage data don’t yet exist in client/server database systems. Tools for monitoring, measuring, testing, evaluating and simulating the performance of the database under different allocation formats “will either not be there or will be significantly less powerful than what’s on the mainframe,” said George Schussel, president of Digital Consulting Inc., an Andover, Mass., consulting firm.

Granted, most client/server database-management systems come with tools for defining a database, allocating space for it, defining tables and adding users. A database-management system can also be expected to provide some functions for maintaining the system, such as backing out of a transaction if there’s a failure. The mechanisms should be sophisticated enough to back up the database while on-line, according to Max Dolgicer, a director for Tucker Network Technologies Inc., a consultancy in South Norwalk, Conn.

Yet the client/server platform, arguably a more flexible and cost-effective computing architecture than the mainframe, is woefully lacking in areas of planning and administration.

“We’ve had to create our own security utilities to allow people to access different parts of the application,” said Bill Soper, manager of information services for Chevron Canada, an oil firm in Vancouver, B.C.

Throw hardware at the problem

How do corporate architects of client/server platforms get over such hurdles? By throwing hardware at the problem. “Because you do not have the tools to tell you what hardware to get, you should figure out what you think you’ll need, then triple it — get three times what you think you need for disk capacity — because it’s cheap,” said Schussel.

It’s not enough just to have a large hard disk — administrators should also ensure data redundancy by using mirroring technology to copy data from the disk to a second disk drive, or use a Redundant Arrays of Inexpensive Disks (RAID) system.

To extend the life of the system, make sure the database-management system selected can run on more powerful processors as they become available. “The key is to have a scalable back end so if you top out, you can move to a more powerful platform,” Soper said.

As with any type of server, the client/server back end should be protected by an uninterruptible power supply (UPS). UPS devices in this class generally range in capacity from 500 volt-amperes (VA) to 1,500 VA, and they cost between $1,000 and $10,000.

And network-management software is a must. “Network-management tools are still not what everyone would like them to be, but you can piece together enough of a solution to get by,” said Maureen Rogers, director of marketing for Softbridge Inc., a developer of test software for custom client/server applications.

By adhering to such tenets, rightsizing pioneers like Richmond Savings Credit Union have managed to reengineer their business systems without putting their companies at risk, and without using fully redundant, high-priced systems.

Richmond Savings moved its banking system five years ago from a proprietary minicomputer to a 486-based client/server platform that is currently humming along at 100,000 transactions a day. Richmond is using a database-management system called Probe, developed by Prologic Computer Corp., of Vancouver, B.C., that has its own front end. Probe is a combination fourth-generation language, relational database-management system and proprietary network operating system. Prologic is currently helping Richmond create links between Probe and NetWare.

“[Prologic] made it so that using DOS, we could create very large databases and have it perform well,” said Allen Lacroix, vice president of technology at the credit union, in Richmond, B.C.

For its central server, Richmond administrators chose a 33MHz 486-based machine with 16M bytes of memory and 3.2G bytes of unformatted disk. Although the administrators looked at RAID disks, they felt they did not need them. “Our database is very reliable, and there are several recovery mechanisms available with it,” said Lacroix.

Another Prologic customer who transferred a credit-union banking system from a mainframe to a client/server platform went further in adding redundancy. The Pacific IBM Employees Federal Credit Union in San Jose, Calif., is using two identically configured IBM PS/2 Model 95s as a central server, said Daryl Tanner, president.

“There are two Model 95s sitting side by side, with one doing nighttime processing. In the morning, we copy one disk over to the other machine, so they both have the same data at the start of the day. If one crashes, we have the other,” he said.

Key Technologies Help Sys Admins

Client/server computing has evolved from concept to reality at a surprising number of companies, mainly because of the convergence of three key technologies.

LAN servers, database-management systems (DBMSs) and high-speed networks — the client/server building blocks — have matured enough in the past year to give companies the confidence they need to erect new computing architectures, according to industry analysts.

These technologies have converged so quickly that even some analysts have been surprised by the client/server wave. In a recent study, Cambridge, Mass., market researcher Forrester Research Inc. found that 69 percent of Fortune 1000 companies now use server databases, compared with only 26 percent a year ago.

It’s as if corporations are rushing to embrace the technologies that will at last free them from costly, proprietary computing structures. In the client/server world, the overriding notion is that of rightsizing — all of the enabling technologies fitting together in a scalable, interoperable platform that allows each company to manage growth at its own pace.

Before jumping headlong into the client/server market, however, buyers should note the continuing evolution of enabling technologies. LAN servers are relatively stable products, but database-management systems and high-speed networking technologies are expected to change dramatically in the coming year.

LAN servers

After years of marginal growth, LAN servers have become “a lucrative market,” said Susan Frankle, senior analyst at market researcher International Data Corp. in Framingham, Mass. LAN server sales will almost double in the next few years, from 700,000 in 1992 to 1.2 million in 1996, she said.

A couple of forces are driving this trend, according to Frankle. First, companies are putting more PCs on networks and may need additional servers to increase performance. Organizations are also putting more network applications on-line. Rather than installing E-mail, fax and database applications on a single server, companies are installing dedicated servers for these types of applications.

Heady competition reigns within the LAN server market, which comprises X86- and RISC-based machines. Makers of multiprocessing super servers (Tricord Systems Inc., Parallan Computer Inc. and NetFrame Systems Inc., for example) are battling manufacturers such as Dell Computer Corp. and Compaq Computer Corp., which are trying to boost single- or dual-processor machines with network-management software, redundant drive arrays and overall system fault tolerance.

Compaq and Dell are doing more than putting PCs on steroids, Frankle said. “They’re trying to create bulletproof machines. As the machine becomes more powerful, it serves critical applications. You can’t have it go down on you,” she added.

DBMSs

As the Forrester report indicates, 1992 was the year that database-management systems caught on. However, most client/server database applications are still in the pilot stage, causing some reluctance among managers of information systems to rely on the new paradigm for mission-critical applications.

Part of the problem is the lack of administration tools available for client/server database systems. While Oracle Corp., Sybase Inc. and other DBMS vendors work on adding tools similar to those available on mainframes, many corporate customers are relying on a two-pronged strategy of running mission-critical applications on mainframes and decision-support applications on client/server databases.

In addition to the lack of administration tools, database vendors have yet to perfect true distributed-database computing, in which databases on a LAN or WAN are constantly updated to reflect the latest changes. For now, administrators are skirting the problem by adding redundancy to LAN servers in the form of Redundant Arrays of Inexpensive Disks devices or uninterruptible power supplies.

Most analysts say the tools now lacking in client/server database systems will be added in the next couple of years, and that the Holy Grail of distributed databases will be within reach by 1995.

High-speed networks

Systems administrators in the throes of rightsizing must constantly be aware of technology beyond the horizon. Nowhere is this watch more promising — and confusing — than in high-speed networking.

Clearly, the momentum in the market is toward 100M-bps networking. The question is, how can corporations tap the power of emerging standards such as Fiber Distributed Data Interface (FDDI) without sacrificing their existing installations? One answer is CDDI, for Copper Distributed Data Interface — an alternative to fiber-optic networking that allows organizations to keep the copper wire they have installed in their buildings.

Even more promising is Fast Ethernet — a proposal for increasing the speeds of Ethernet networks from the current 10M bps to 100M bps. At the moment, however, Fast Ethernet proponents are split into two camps. One calls for extending Ethernet so that corporations can switch between 10M-bps and 100M-bps speeds, and the other favors replacing Ethernet’s basic access method to achieve the higher speed.

Many of the issues surrounding 100M-bps networking should be worked out within the year. Meanwhile, the existing state of the art — 10M-bps Ethernet and 16M-bps Token-Ring networking — is more than adequate for pilot client/server applications.

Get Smart With Your Client/Server Apps

Everybody knows how to build micro-based database applications, right? Just install the database program on the PC, spend a few minutes designing the database and then start writing the application.

Not in the client/server world.

With applications that involve both client and server software, quite a bit of behind-the-scenes work is required before developers can write a line of code. Unless they recognize the need for this work and schedule time for it, companies will find client/server projects inexplicably slow at first.

The client/server world involves three distinct entities: the server machine, the client machines and the network that connects the two. The server and clients must be able to communicate over the network before they can run any useful software.

Basic network connectivity

The first step is to establish physical connectivity between the clients and the server. Like the entire prep process, this is best done on a few pilot client systems. Once those systems are up and working, administrators can copy the working setups to actual client machines while minimizing the disruption for people using those machines.

To make sure the clients and the server are on the same network segment, install the appropriate network adapter in each client machine and connect that machine to the network with the appropriate type of cable. This task is best left to the network administrator. In some organizations, the clients and server machine will already be running on the appropriate network.

The next step is to establish connectivity between the basic network-protocol stacks running on the client and server machines. On the client side, this usually involves loading the network-adapter drivers for the appropriate network protocols. (Those would be SPX/IPX for a NetWare network, TCP/IP for a Unix network and so on.) Having some network protocols running on the clients may not be enough, because the database server may require a different network-protocol stack. Some database servers, for example, are available on Unix systems but not as NetWare Loadable Modules; such servers often require TCP/IP protocol stacks on the clients.

Consider, for example, an organization that wants to plug a Unix-based database server into an existing NetWare network. The client PCs are running SPX/IPX, but the database server needs TCP/IP. Simply changing the clients to use only TCP/IP is probably not an option, because the clients need SPX/IPX for their existing file and print services.

The answer is to install on the client PCs a mechanism that lets the SPX/IPX and TCP/IP stacks coexist. One of the best ways to do this is to install drivers that obey an interface standard designed to work with multiple protocol stacks. Two such standards are Microsoft Corp.’s Network Driver Interface Specification (NDIS) and Novell Inc.’s Open Data-Link Interface (ODI).

Both NDIS and ODI drivers work the same basic way. Each loads a low-level driver for the adapter, then loads a piece of protocol-independent software. With this software, the client PC can run multiple protocol stacks, each of which talks to the same single protocol-independent layer.
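Conceptually, the arrangement looks like the Python sketch below: one protocol-independent layer sits on top of the adapter driver and hands each incoming frame to whichever stack registered for its protocol type. The class and protocol identifiers are illustrative only; they are not the actual NDIS or ODI programming interfaces.

```python
# Conceptual sketch: one adapter driver, one protocol-independent
# layer, many protocol stacks. Frames are handed to whichever stack
# registered for their protocol type.

class ProtocolIndependentLayer:
    def __init__(self):
        self.stacks = {}
    def register(self, protocol_id, handler):
        self.stacks[protocol_id] = handler
    def frame_received(self, protocol_id, payload):
        handler = self.stacks.get(protocol_id)
        if handler:
            handler(payload)

def ipx_stack(payload):
    print("SPX/IPX stack handled:", payload)

def tcpip_stack(payload):
    print("TCP/IP stack handled:", payload)

layer = ProtocolIndependentLayer()
layer.register("IPX", ipx_stack)       # existing file and print traffic
layer.register("IP", tcpip_stack)      # database server traffic

layer.frame_received("IPX", "print job for LPT1")
layer.frame_received("IP", "SQL query for the Unix database server")
```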

The main drawback of this approach is RAM consumption. Administrators must load as much of this software as possible into high memory, or client PCs running DOS or Windows may run up against the conventional 640K-byte limit.

This approach can also be quite expensive, frequently in ways that are not initially visible. Switching to ODI drivers, for example, will work only if ODI drivers are available for all the network adapters in the client PCs.

Basic server connectivity

With the proper network protocols installed and the client systems able to connect to the server, the next step is to link the front-end program and the server-database software.

Most database-server packages include their own network drivers for this purpose. Those drivers, which are installed on the client machines, format network data in such a way that both the database server and its compatible client front ends can handle it.

Oracle Corp.’s Oracle database server, for example, comes with a driver called SQL*Net, which provides a layer of abstraction between all Oracle client front ends and the underlying network protocols. On NetWare networks, SQL*Net uses the SPX protocol to link clients to an Oracle server.

These drivers often use one or more configuration files to control their operation, and each of those configuration files must be set up correctly. The drivers can easily consume 50K bytes or more of RAM, so they can aggravate the RAM-cram problem.

The result of this effort should be the client front ends successfully talking to the database server.

At this point on PC clients, however, RAM can be so tight that the front ends don’t have enough RAM to run. If that happens, it’s time to pause and go through all the usual steps to regain RAM: Consider using a memory manager, load high as much software as possible and so on.

Putting all the pieces together

When the RAM problem is under control, the next step is to set up user accounts and permissions in the database server.

First, log in to the database server. Some of the preceding steps may require the special privileges of an administrator account, but don’t be tempted to keep using that account while setting up the front-end packages. Use an account with standard user privileges to test the links between the front ends and the database server. Unless the eventual users of the client/server application will have special privileges, developers should not use privileged accounts.

Some databases even come with a built-in user account for this kind of testing. Oracle, for example, includes a user ID SCOTT with a password of TIGER for running tutorials and verifying that everything works.

After the log-on process, all that remains is to verify that a standard user account can indeed read and write to the database. The easiest way to conduct this verification is to use one of the sample database tables that most database-server vendors include with their products. When the front ends can manipulate one or more of these sample tables, all the pieces are in place.
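The check itself is small: connect as the ordinary user, write a row into a sample table and read it back. The sketch below uses Python’s built-in sqlite3 module purely as a runnable stand-in for whatever client/server database and sample table are actually installed; the emp table and its contents are borrowed from Oracle’s familiar SCOTT schema only for illustration.

```python
# Stand-in verification script: prove an ordinary account can both
# write to and read from a sample table. sqlite3 is used here only
# so the sketch runs anywhere; a real setup would connect to the
# database server with the standard (non-administrator) account.

import sqlite3

conn = sqlite3.connect(":memory:")      # stand-in for the server connection
conn.execute("CREATE TABLE emp (empno INTEGER, ename TEXT)")  # sample table

# Write...
conn.execute("INSERT INTO emp VALUES (?, ?)", (7369, "SMITH"))
conn.commit()

# ...and read back.
rows = conn.execute("SELECT empno, ename FROM emp").fetchall()
assert rows == [(7369, "SMITH")], "read-back failed"
print("standard account can read and write the sample table:", rows)
```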

A STEP-BY-STEP GUIDE TO HELP PUT YOUR CLIENT/SERVER DATABASE ON-LINE

1. In many organizations, the client systems already will be connected to the same network as the database server system. If they are not, install the appropriate LAN adapter and cables on the client machine.

2. Install the network protocol stack (for example, SPX/IPX, TCP/IP or NetBEUI) on the client that is needed to communicate with the database server. Again, in some organizations those protocols already may be running on the clients.

3. Ensure network coexistence. This is necessary when the clients are running a network protocol, but not the one needed to talk to the database server. A network protocol coexistence standard — such as NDIS or ODI — will enable the client’s network adapter simultaneously to run both its current protocol stack and the protocol stack required by the database server.

4. Install the software that communicates between the database front-end applications and the database server’s protocol stack. For Oracle servers on a NetWare network, for example, the SQL*Net drivers for SPX/IPX are necessary.

5. Check for sufficient RAM. On PC clients, adding all these pieces of software — network protocol, network coexistence and database connectivity programs — can cause RAM shortages. To have enough RAM to run the database front-end application, the installer may need to regain RAM by taking such steps as loading software high and using a memory manager.

6. Establish a database user account. Use a standard user account, not a privileged administrator account, for all testing. Either use an available default user account that the database server includes or create a user account for development and testing.

7. Log in to the database with the selected user account. Verify that the account and the other developer accounts can work with sample database tables.

8. Get to the “real” work. Start designing the database, laying out the prototype screens and building the client/server application.

Middleware Makes Good On Its Promises

Imagine seamlessly accessing data from IBM AS/400, HP 9000 and PC servers, regardless of the type of client platform. Better still, imagine developing an application that runs on multiple clients and integrates with these servers without prior knowledge of what those platforms are.

Creating such a truly distributed computing environment is the goal of a growing class of software called “middleware,” which provides a common interface for diverse front ends and servers. With so many approaches to client/server computing and a diverse installed base, the demand for middleware is increasing. By definition, client/server applications demand the seamless integration of software across computer platforms, operating systems and networks.

One new middleware product slated to ship next month is Oracle Corp.’s Oracle Glue for Windows, which provides access from Microsoft Corp.’s Visual Basic and Excel, Dynamic Data Exchange-enabled applications and programs that support dynamic link libraries to Oracle and IBM DB2 servers, Borland International Inc.’s dBASE and Paradox files, and Sharp Electronics Corp.’s Wizard 7000 and 8000 series electronic organizers.

“There’s an overwhelming need for middleware,” said Tucker McDonagh, managing director of Tucker Network Technologies Inc. in South Norwalk, Conn. Standards emerging in the market, such as the Vendor-Independent Messaging API (application programming interface), will “relieve some of the complexity [of disparate environments], but middleware will always be a requirement,” he said.

Middleware provides an API for exchanging messages between pieces of distributed applications, McDonagh said. Some programs also can work with multiple protocol stacks, so separate applications can interoperate across unique LANs and WANs.

McDonagh divides middleware into five groups: messaging products with an API and communications services for distributed applications; remote procedure calls, with an API that works over multiple protocol stacks; database/data-access packages, which link to various databases; on-line transaction-processing programs, which add error-control, security and recovery aspects; and traditional middleware, or file-transfer software.
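For the first of those groups, the messaging style boils down to two calls, send and receive against a named queue, with the transport hidden from both ends. The Python sketch below is a generic illustration of that API shape under those assumptions, not any vendor’s actual product.

```python
# Generic sketch of a messaging middleware API: applications talk
# to named queues, and the middleware hides which machine, protocol
# or platform sits on the other end.

from collections import defaultdict, deque

class MessageBus:
    def __init__(self):
        self.queues = defaultdict(deque)
    def send(self, queue_name, message):
        self.queues[queue_name].append(message)
    def receive(self, queue_name):
        q = self.queues[queue_name]
        return q.popleft() if q else None

bus = MessageBus()

# A Windows front end and an AS/400 server piece would each see
# only send() and receive(), never the transport underneath.
bus.send("orders", {"item": "policy renewal", "qty": 1})
print(bus.receive("orders"))
print(bus.receive("orders"))   # None: the queue is empty
```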

Developers of client/server software are providing links to multiple clients and back ends. Trifox Inc.’s Vortex, for example, includes software that allows users to access a variety of database servers, such as Ingres, Sybase, Informix, Oracle and Ultrix/SQL.

Oracle Corp. also offers a software suite that eases access from multiple clients to its back ends. With Oracle Card, a front-end development tool, both PCs and Apple Computer Inc. PowerBooks can access an Oracle database, according to Frank Naeymi-Rad, MIS director for the University of Health Science at the Chicago Medical School.

“Oracle Card provides us with one platform for running both PCs and Macintosh computers,” he said. The application allows medical students and doctors at Cook County Hospital to use PowerBook notebook computers to enter data while at a patient’s bedside and transfer it over a network that is also accessed by DOS PCs.

In other cases, middleware serves as the catalyst for creating a distributed network. The state of Alabama, for example, is developing a system that uses Information Builders Inc.’s (IBI’s) EDA/SQL to link OS/2-based clients at 43 prospective counties with an IBM DB2 mainframe server.

“We run multiuser EDA/SQL at the gateway to ship requests to the host for updates to the DB2 server,” said David Murrell, information-systems manager for the state’s advanced technology group in Montgomery. “We really use EDA/SQL to accomplish a distributed-database environment.”

EDA/SQL stands out among database tools because it can be used with more than 100 front-end tools to access more than 50 types of databases on 35 different platforms.

While most middleware currently supports specific protocol stacks and applications, industry giants intend to establish all-encompassing standards. Digital Equipment Corp. (DEC), for example, engineered Network Application Support (NAS), an architecture and products designed to provide the ultimate level of open systems.

DEC has more than 90 NAS products for nine hardware platforms. Examples include its Compound Document Architecture for documents with graphics and text and Application Control Architecture Services, which provide object-oriented tools for communicating between applications.

Third-party developers have also embraced the NAS architecture. More than 3,000 NAS applications are available from such firms as Computer Associates International Inc., IBI and Microsoft.

Digital Communications Associates Inc. (DCA) also plans to unveil its own platform this spring. The firm recently announced a universal communications architecture to provide consistent access, features and APIs. DCA’s first product based on this architecture will be QuickApp, an application-development tool that shields programmers from communications-transport protocols, including LU 2, LU 6.2 and LU 0.

Users of tools such as Visual Basic and Mozart Systems Corp.’s Application Renovation development software will be able to create end-user applications without knowledge of communications software, according to DCA officials in Alpharetta, Ga.