One of the Two Essentials for Computer Timesharing to be Viable - The Origins of Tymnet: Version 2

To be viable, commercial computer timesharing requires two technologies: a computer capable of serving many simultaneous users such that they each think they have the computer all to themselves, and a reliable and ubiquitous means to connect many remote users to that computer. This post is about the latter. It is the first of several by Norm Hardy on Tymnet. Norm was a creator and developer of Tymnet from its very beginnings.


This is Part 2


[Image: network map]

Development of Tymnet II

By Norm Hardy

As Tymshare acquired more and different host machines, it became even clearer that upgrades were needed to the design of Tymnet.  The PDP-10 arrived in 1970 and was adapted to Tymnet by Bill Weiher so as to avoid the expensive “per-user” communications equipment from DEC.  Any programmer will tell you that one program knowing addresses in another is a very bad idea, but that was the case in Tymnet I. It worked pretty well but did limit architectural progress.

LaRoy broached the idea that the supervisor would merely construct a needle which would pass through the net, guiding the creation of a new circuit. The needle would carry a 4-bit nibble for each node, to steer the needle as it passed that node during circuit construction. The supervisor would still need to know the network topology and how each node numbered each link. It would also know the node and link loads. It would no longer need to model the routing tables in each node; routing table entries would be allocated locally. Twenty-five years later this pattern became the main notion behind MPLS (Multiprotocol Label Switching) routing. The cost of core memory for nodes had come down, and the new node code was larger but more modular and efficient. At this time nodes typically had 8K or 12K of 16-bit words. A few had 16K.
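The needle mechanism can be sketched in a few lines of Python. This is a hypothetical reconstruction for illustration only (the real node code was minicomputer assembly, and the class and field names here are invented): each node consumes one nibble from the needle to pick an outgoing link, and allocates its routing-table entry locally rather than having the supervisor patch tables by absolute address.

```python
# Hypothetical sketch of "needle" circuit building -- not the original Tymnet
# code. Each node consumes one 4-bit nibble to choose an outgoing link and
# allocates a routing-table entry locally.

class Node:
    def __init__(self, name, links):
        self.name = name
        self.links = links          # link number -> neighbor Node
        self.table = {}             # locally allocated circuit id -> outgoing link
        self.next_id = 0

    def build_circuit(self, needle):
        """Consume one nibble, record the hop, forward the rest of the needle."""
        circuit_id = self.next_id   # local allocation, no supervisor involvement
        self.next_id += 1
        if not needle:              # needle exhausted: circuit terminates here
            self.table[circuit_id] = None
            return [(self.name, circuit_id)]
        link = needle[0] & 0xF      # the steering nibble meant for this node
        self.table[circuit_id] = link
        nxt = self.links[link]
        return [(self.name, circuit_id)] + nxt.build_circuit(needle[1:])

# Three-node chain: A --link 2--> B --link 1--> C (host side)
c = Node("C", {})
b = Node("B", {1: c})
a = Node("A", {2: b})

path = a.build_circuit([2, 1])   # supervisor-computed needle: nibble 2, then 1
print(path)                      # [('A', 0), ('B', 0), ('C', 0)]
```

The supervisor still computes the route (the nibble sequence), but only the nodes themselves touch their tables, which is essentially the label-switching pattern MPLS later adopted.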

Once a particularly nasty bug struck the New Orleans node. Usually node bugs would have the civility to strike first, or eventually, near Cupertino, but this one took drastic measures. We loaded an extra core bank with the code for New Orleans and mailed it there in a shoe box. Upon the next crash they swapped core and mailed the core module with the crash state back to Cupertino. We found the bug.

When Tymshare opened the Paris office with its own 940 computer, there was consternation over the idea that the European network would be managed from the States. This consternation was largely overcome by rewriting a few manuals to describe the operation differently. The Paris 940 ran a supervisor so that when the trans-Atlantic link was out they would still be in service. Once, when all of the American supervisors had gone down (a very rare occurrence), the Paris node tried to take over the American network via a 2400 bit/sec link. It failed, for by that time 2400 bit/sec did not suffice to control even a night load in the States.

Tymnet Service

Our timesharing computers now attracted business from those who found it convenient that their program and its data were accessible from a large number of geographical sites. Some customers, however, found our timesharing computers inadequate to their tasks, even with special pricing. There were increasing calls, inside Tymshare and out, to make Tymnet serve hosts other than our own. We gradually did so. This required a considerable programming staff to work with the owners of diverse hosts to connect those hosts to Tymnet.  Many techniques were used, including DMA (Direct Memory Access).

This new business was easy to start because the simplest host interface mechanism was the early cable-per-connection configuration, which required no changes to the host: just power, floor space, and termination of a leased line at our site. The prospective customer had very little up-front cost to impede him. After the first day his computer would be accessible by a variety of terminal types from a variety of US cities and even several foreign countries. Here is a memento from the NLM, and some perspective from our first customer. Also see a contemporary directory of institutions with computers connected to Tymnet.

All through this development new varieties of terminals were coming to market. They came with higher baud rates and peculiar timing requirements. We would teach Tymnet about these new terminals and the node near the terminal would take care of the peculiarities. Other users of these terminals would have to adapt their host software to accommodate these strange timing requirements. It gradually dawned on us that the adaptation of various hosts to various terminals was a strategic marketing advantage.

With the advent of Tymnet service, the Division became a company in its own right in 1979. After McDonnell Douglas bought Tymshare and then sold it to EDS, Tymnet was sold to British Telecom (which was simultaneously merging with MCI).  BT renamed Tymnet “Global Network Services.”   At its peak there were 6,600 nodes in Tymnet, at nearly as many locations.

Don Johnson, who had been connected with Tymnet for many years, said on Monday, March 24, 2003, “We shut down Tymnet at 11:48 AM PST this morning.”


One of the Two Essentials for Computer Timesharing to be Viable - The Origins of Tymnet: Version 1

To be viable, commercial computer timesharing requires two technologies: a computer capable of serving many simultaneous users such that they each think they have the computer all to themselves, and a reliable and ubiquitous means to connect many remote users to that computer. This post is about the latter. It is the first of several by Norm Hardy on Tymnet. Norm was a creator and developer of Tymnet from its very beginnings.


This is Part 1


[Image: network map]

The Origins of Tymnet – Tymnet Version 1

By Norm Hardy


Tymshare began providing timesharing service on the SDS 940 in September 1966.  The machine came from Scientific Data Systems with “CTE equipment” that attached about 16 terminals or modems, each with an RS-232 voltage-mode plug. Until recently, modems had the same shape of plug. The only practical terminal then was the Model 33 Teletype from Teletype Corporation, a subsidiary of AT&T. It ran at 110 baud, yielding 10 characters per second. It was upper case only, heavy and noisy. It was also cheap and reliable.

During the first year or so, customers dealt with the phone company to reach the computer center in Palo Alto, California. Most were within toll-free calling radius. Tymshare soon established another computer center near Los Angeles and developed a largely disjoint set of customers there. It planned further expansion.

There were frequency division multiplexers on the market that would handle about 16 Teletype circuits over one voice-grade line. Each user was assigned a frequency, and full duplex leased voice lines would connect the multiplexers. The company tried these, but they were expensive and error prone. Minicomputers, costing several thousand dollars, were then becoming available, and it seemed clear that they could do a variety of communication tricks which would soon pay their way in saved communications costs.  I joined Tymshare in Q4 1967 from IBM, and LaRoy Tymes joined in Q1 1968 from the Lawrence Livermore lab, where we had met in the early 1960s.  Because of our experiences together there, we were both eager to pursue this adventure.  Howard Steadman, already at Tymshare, dealt in hardware and encouraged us to suggest innovative solutions to interfacing with modems. Dave Schmidt was vice president of Tymshare and understood the technical and economic advantages of these schemes to Tymshare. LaRoy wrote almost all of the early Tymnet code. He and I did the design.

At that time the phone company (AT&T) was considered a natural monopoly. They held that modems were the sole province of the telco. Tymshare had already produced acoustic couplers that competed with the telco’s 103A data sets (data set = modem), but AT&T’s lawyers were busy fighting what they considered to be more serious encroachments on their turf.

Initially we found a strange little computer called the SPC12. It had 4K of 12-bit words. Most significantly, it had parallel input and output, 12 bits wide, which could be interfaced with modems through simple voltage converters. The software would prepare and time the signals; there was no hardware devoted to timing the bits for each communications line. Precursors to this plan had been done at Livermore, and in military applications before that.
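The idea of timing bits in software rather than hardware can be illustrated with a toy encoder and decoder. All names and framing details here are invented for illustration (the real SPC12 code sampled twelve lines at once through its parallel ports); the point is that the program, not a UART chip, decides when each bit is sampled.

```python
# Toy illustration of software-timed serial, not the actual SPC12 code.
# A 110-baud character: 1 start bit (0), 8 data bits LSB-first, stop bits (1).

def encode(char):
    bits = [0]                                        # start bit
    bits += [(ord(char) >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits += [1, 1]                                    # two stop bits
    return bits

def decode(samples):
    """Software 'UART': find the start bit, then shift in 8 data bits."""
    i = samples.index(0)                        # falling edge marks the start bit
    value = 0
    for bit in range(8):
        value |= samples[i + 1 + bit] << bit    # one sample per bit time
    return chr(value)

line = encode("A")
print(decode(line))   # prints: A
```

In the real machine the "one sample per bit time" step was a timed software loop over the parallel input port, so a single cheap processor replaced per-line timing hardware.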

The first deployment used the SPC12 as a fixed time division multiplexer, providing full duplex service for 29 terminals, each running at 110 baud. Pairs of such multiplexers were connected with a voice-grade line, leased from AT&T, and two AT&T 201 data sets at 2400 bits/sec. There was no error control. These were deployed for a short while. We found the SPC12 too small and had located another machine, a Varian Data Machines model that used 16-bit words and could be expanded beyond 4K words. This machine, and its direct descendants, served as the Tymnet workhorse for the next several years. Howard Steadman equipped the machine with parallel I/O like the SPC12’s. We programmed this machine to do statistical multiplexing, which took advantage of the fact that not all lines ran continuously. This allowed more users per leased line. We added backward error control on the multiplexed lines.
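The gain over the fixed multiplexer is that a statistical multiplexer sends data only for the channels that are active, tagging each chunk with its channel number, so idle terminals consume no line capacity. A minimal sketch, with an invented frame format (the real Tymnet frame format was different and included the backward error control mentioned above):

```python
# Minimal sketch of statistical multiplexing with an invented frame format.
# Only channels with pending data get space in the frame; each chunk is
# tagged with its channel number, so idle lines cost nothing on the link.

def mux(channels):
    """channels: dict of channel id -> pending bytes. Returns one frame."""
    frame = []
    for chan, data in channels.items():
        if data:                      # idle channels are simply skipped
            frame.append((chan, data))
    return frame

def demux(frame):
    out = {}
    for chan, data in frame:
        out[chan] = out.get(chan, b"") + data
    return out

# 29 terminals on the line, but only two have typed anything this period:
pending = {i: b"" for i in range(29)}
pending[3] = b"LOGIN"
pending[17] = b"LS"
frame = mux(pending)
print(frame)          # [(3, b'LOGIN'), (17, b'LS')]
print(demux(frame))   # {3: b'LOGIN', 17: b'LS'}
```

A fixed time-division multiplexer would have transmitted 29 slots regardless; here the frame carries only two, which is how more users fit on one leased line.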

With this phase of Tymnet (not yet so named, however) pairs of nodes would be connected by a leased line. One node, the base, would sit next to the 940 computer, connected thereto by many cables. This still required the CTE equipment. The other node would typically sit in some Tymshare sales office. That office could access only that one 940 computer.

Several developments then began. Ann Hardy began the modification to the 940 operating system to supplant most of the function of the local node, thus avoiding work both in the 940 host and in the 620 node. The 940 and 620 exchanged teletype traffic in a portion of 940 host memory to which the 620 had read-write access via hardware we had built. LaRoy reprogrammed the 620 to send and receive TTY traffic there. With this change the CTE equipment was retired and one cable connected the host to the Tymnet 620 base.

About this time the remote echo issue bubbled to the top of the queue.

We also began to introduce forwarding logic in the nodes so that traffic could travel through several nodes, each hop via another leased line. This required routing which was provided by static routing tables. A dial-up access port would reach the same host for weeks at a time, but at least we had much greater flexibility in allocating ports to hosts. Sales spent many painful meetings devoted to this allocation.

We had been thinking about dynamic routing tables to be built as new users logged in. We had no computer with enough power to do this in a brute-force way. We could have put it on a Varian machine, but that lacked sufficient memory. We chose to program it for the 940 with a slightly modified version of the operating system. LaRoy wrote that program, using algorithms to dynamically sense the network topology. That first “Supervisor” knew several versions of the node software and the sizes and absolute addresses of the routing tables in each of those versions. The supervisor would send out “data-grams” to cause the distributed nodes to change their routing tables. Several 940s (at least two) would simultaneously host supervisors, which would politely agree among themselves which was to supervise the network. If such a supervisor or its host crashed, another supervisor would notice, discover the current net topology, and assert control. This might take a few minutes, during which current sessions were unaffected but new users could not log in. Most days went by without such disturbances.

I recall bringing up the first supervisor.

Tymnet was running well and already serving customers. Circuits were laid out manually, much as AT&T would manually engineer leased lines to be rented by customers. That was tedious and often involved a few weeks’ response time.

We had vaguely imagined, from the beginning of Tymnet, the real-time automation of new circuits. The SDS 940 was the largest machine we had available (192 KB), and so we used it to dynamically set up circuits at the beginning of each login session. The new program was called the supervisor.

We converted a small fraction of Tymnet, perhaps 20 nodes around Cupertino, to the new system. With the new system any dialup port would be able to reach any timesharing host. We went through the normal debugging stages, which were eased since the nodes were all local. There were manifest difficulties in arranging yet larger test networks, and the flexibility of switched circuits was badly needed. The cost of the switchover included manually reloading a few hundred nodes in many cities.  We finally decided in November 1971 to bite the bullet and convert the whole network. One Friday evening, as the timesharing load waned, we took down the whole network by beginning to load the new Tymnet node code into each of the many nodes.

The 940 struggled valiantly, reporting the number of nodes whose connections it had discerned. We waited and would take occasional steps to help it along by adjusting heuristic parameters. I am not of much use after midnight, so I went home. LaRoy (Tymes) stayed on. Early the next morning I came in and found all of Tymnet up, with customers. I assumed that LaRoy, who had written all the code, had found and fixed the problems. An hour or so later LaRoy came in and assumed that I had found the problems that morning. In fact the supervisor had not been able to come up while we were still there meddling. After LaRoy had finally left, the supervisor did its methodical thing and mapped and took over the whole net.

Of course other bugs were found and improvements were made but Tymnet remained supervised from that point on.

One day in a marketing meeting, Virgil Swearingen suggested the name “Tymnet”. Few liked the name but it was clear that it was the best that had yet been suggested. It improved with time.

That was Tymnet I. It is also described, fairly well, at this link.


The Cloud Before it was Called the Cloud - Segment 1, Part 2


This is the second part of the first segment on “The Cloud Before it was Called the Cloud”

The entire series will cover aspects of the computer timesharing industry from about 1960 through 1985, and how the requirements from that era and industry drove the development of software, process, applications, networks and other technologies that made a lot of today’s modern cloud computing possible.

Our intent is to give you a view into the kind of circumstances that made using a computer timesharing service a great solution and alternative to the corporate computing services the typical large company offered in the 1970s. And how those demands were met.

The goal of Parts 1 and 2 is to illustrate the general state of computing in that era and the then-radical changes made available by computer timesharing.  I described my own involvement in very specific real-life business challenges needing a solution:

  1. Gather in-house semiconductor sales data to generate an accurate report to an industry association by a monthly deadline.
  2. Analyze the consolidated industry reports (the payoff from the reporting) to make company business decisions
  3. Provide an interactive environment for fast and regular completion of both tasks
  4. Reduce the time cycle for sending and receiving reports so that subsequent analysis was as timely as possible
  5. Gain the capability for ad hoc analysis and reporting to enable even faster and better decisions


Part 2 talks about how computer timesharing made a good solution possible.


Part 2

Fairchild Market Research and Planning (MRP) had elected to write computer applications themselves for 1) association reporting commitments and 2) in-house data analysis and custom reports. An MRP department employee chose a solution enabled by a computer timesharing service. He selected Tymshare as the service provider and Super Basic as the language in which it was to be written. He was only part way through implementation when he left the company! Nobody else knew much about what he was doing, and there were no design documents or other notes available. The MRP folks approached Product Marketing looking for someone to help them complete this project. I heard their requirements and was intrigued by the challenge. As a result I signed on for what was to be a six-month commitment to get the project completed and able to operate on an ongoing basis every month in the future.

At that point my total experience with computing had been three courses I took at Georgia Tech as part of obtaining my EE degree. In those courses I had programmed in ALGOL (a FORTRAN-like language). I had no prior knowledge of Super Basic or of any of the other Tymshare resources needed to get this done. This is where the simplicity of Super Basic, its self-documenting nature, and the excellent training and documentation from Tymshare made up for my lack of knowledge.

I plunged in and took classes from Tymshare in Super Basic, in its OS command level Exec, and in Editor, the interactive application for editing code. The Tymshare service model included teams of Tymshare field sales and support people who learned what each customer was doing and needed, and who not only advised them on how to do it but often directly helped get customers through tough spots.

The first task was to complete the data entry and device type consolidation process so we could meet the reporting deadlines for the monthly association report, and to reduce errors while dramatically cutting down the time to complete this task. We chose Data Point CRT terminals with mag tape cassettes for data entry (our own MIS could not supply us with a computer-readable version of the company raw sales report). We took delivery of the very first ones delivered and immediately put them into use. The terminals and mag tapes allowed data entry, verification and editing offline.

When the monthly data was all entered on tape, the Data Point terminal was connected to Tymshare’s network and computer via modem, and the data was uploaded to our department Tymshare account and then formed into sales data files. The first application I wrote processed the raw unit and dollar data given to us by MIS into product groups by generic device type. Next it formatted the device types and accompanying sales data to fit the report format we had to submit to the industry association. Finally, we produced a properly formatted report using a small printing terminal, and then the report was off to the association. They were not able to take a computer file of the report!

When, in two weeks or so, we got a consolidated industry report back from the association, the next phase of the work began. The report received was entered into our Tymshare account disk files much like the company data earlier: typed onto the mag tape cassettes and then uploaded. It then became a pretty simple task to use that consolidated industry members’ data and our own data to produce a number of standard monthly reports and analytics that were top priority for Fairchild management. Using matrix operations we pretty much divided the Fairchild matrix (units and dollars by generic device type) by the consolidated industry matrix to produce market share reports by device type. By virtue of previous months’ data stored in our account, we could also plot historic price and volume graphs for Fairchild and the industry.
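That matrix arithmetic amounts to element-wise division of the Fairchild figures by the industry totals, one quotient per generic device type. A sketch in Python with invented device names and numbers (the originals were Super Basic programs over the association's device-type categories):

```python
# Sketch of the market-share calculation described above: element-wise
# division of the Fairchild matrix by the consolidated industry matrix,
# per generic device type. All names and numbers here are invented.

device_types    = ["TTL gate", "TTL flip-flop", "Op amp"]
fairchild_units = [120_000, 45_000, 60_000]    # units shipped this month
industry_units  = [900_000, 300_000, 480_000]  # association consolidated totals

share = [f / i for f, i in zip(fairchild_units, industry_units)]

for name, s in zip(device_types, share):
    print(f"{name:15s} {s:.1%}")
# per-type shares: 13.3%, 15.0%, 12.5%
```

Keeping each month's vectors on file is also what made the historic price and volume plots cheap: the time series accumulated in the account as a side effect of the monthly run.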

We reduced this report/analyze/report cycle from almost exactly one month to about a week. Now the internal analysis reports were three weeks newer than before, enabling timely analysis and decisions that could help drive the business. Three weeks sooner was critical. Tymshare and computer timesharing helped make us heroes!

The biggest resulting challenge was the ensuing demand by management for ad hoc reports and analysis. Now that they had data much sooner than before, they thought of a number of other cuts of the data they wanted. This was my introduction to what is of course a constant and the norm in computing: improve the delivery of data some, and the result is a request for more!

Depending on market conditions, successes, failures, competitors, and Fairchild’s new product introductions and production problems, there were many new demands for reporting that differed greatly each month. At that time (1971) Tymshare had no general report generation facility; each report had to be written in Super Basic. Even though portions of previous report code could often be reused along with new code, there was no easy or super-quick way to supply those newly requested reports. But even so, it was much, much faster than the old manual method, so it was deemed acceptable.

Early versions of report generators would come along within two years, but it still amazes me that neither I nor any of the folks at Tymshare I worked with saw that we could generalize some of the code we had already written to allow new reports to be produced faster each time they were needed. It was an obvious business opportunity we were blind to, but we were too happy with what we were already accomplishing for the task at hand to see it!

Here’s a summary of the critical pieces of Tymshare’s (and other vendors’) products and services that allowed the initial goals in MRP to be accomplished:

  1. The availability of an on-demand computing service to build, operate and deliver applications to end users
  2. Languages that were interactive, so that building apps was fairly quick, allowing testing and completion to occur without waits for periodic ‘batch’ runs that revealed errors and problems which could seriously set back results
  3. A highly reliable computer network that allowed the app building, processing and reporting to be done from any location, at work or home, with a phone line, via a modem and a terminal. The incredible hours I put in to develop and maintain the programs could sometimes be put in from home or other locations.
  4. Terminals to fit each need from off line entry and editing, online uploading, verification, reports and analysis
  5. At our end-user site, help and advice from the team assigned by Tymshare to my account. These people did the same support for other customers and were often able to use that experience to suggest better and more efficient ways to accomplish our computing needs
  6. A behind-the-scenes operations staff at Tymshare. They kept the computers and network running and available more or less 24/7. They did regular tape backup for all customer accounts so that system problems or human error would not have much effect. For massive data requirements they could also accept on-site data input in mag tape or card formats
  7. A recurring schedule of classes locally for each of the major languages, the Tymshare operating system and subsidiary applications like Editor.
  8. Superb, easy-to-read documentation made for non-computing professionals. There were quick summary guides and reference manuals, all very readable and with clear examples. They differed night and day from what the mainframe vendors were supplying to their MIS customers.
  9. Evolving technologies that added help to building and operating applications. The first of these was a built-in ‘Sort’ capability that allowed an application program to call it, point it at a data matrix for sorting, and get back a correctly sorted data set. Before that, the sort function had to be written into each program! Several other vital related functions followed rapidly: merge, select, delete and report. Program operation became faster and maintenance easier.
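In modern terms, item 9 describes calling a library sort instead of hand-coding one into every report program. A rough Python analogy (the sample rows are invented; the actual facility was a Tymshare service called from Super Basic, not this API):

```python
# Loose modern analogy to the built-in 'Sort' of item 9: hand the library a
# data matrix and a key instead of writing the sort loop into every report.
# The rows below are invented sample data.

sales = [
    ("9301 gate",     41_000),
    ("9602 one-shot", 12_500),
    ("9300 register", 27_000),
]

# Sort by dollar volume, descending -- one call, no hand-written sort code.
by_dollars = sorted(sales, key=lambda row: row[1], reverse=True)
print(by_dollars[0][0])   # prints: 9301 gate
```

The point of the 1970s version was the same as today's: once sorting (and later merge, select, delete and report) lived in the platform, each report program shrank and its maintenance burden shrank with it.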

That’s our first story of a real-life need and how computer timesharing provided a solution! Upcoming segments will cover aspects of these enabling services and technologies, plus many others as well. We hope you will participate wherever you wish by commenting on the blog, or contact us to post a new segment of your own.


The Cloud Before it was Called the Cloud - Segment 1

[Image: network map]

Today’s Cloud provides resources, access and data that make it indispensable. But virtually none of the services that evolved into today’s Cloud existed before computer timesharing came into being and became available to the end-user computing population. Computer timesharing?  Did this involve enduring forceful sales presentations while holding a complimentary Mai Tai, and ending with a lifetime obligation to pay for a week a year at a condo in Hawaii? Well, in a word: no!

This example (in two sections) may give you an idea of the kind of real industry requirements in the 1970s that made using a computer timesharing service a great solution and alternative to the corporate computing services the typical large company offered its own organizations. And what those requirements did to cause the fundamental pieces of today’s Cloud to be developed and enhanced.

For this first segment I’ll give you an example of the scenarios that drove the computer timesharing industry to great success, by recalling my own involvement in very specific real-life 1970s computing challenges needing a solution. An overview of the business problem as it existed for me:

  1. Gather in-house sales data to generate an accurate report to an industry association by a monthly deadline.
  2. Analyze the reports (the payoff of the reporting) to make business decisions
  3. Provide an interactive environment for fast and regular completion of both tasks-gathering accurate data and analysis
  4. Reduce the time cycle for sending and receiving reports so that subsequent analysis was as timely as possible

My own experience with computer timesharing started with a classic case of a corporate departmental need to get some specific computing done on a recurring basis. In late 1971 Fairchild Semiconductor was a technology, sales and marketing company that by then offered a huge number of product families and device variations. The industry was very competitive, so prices and market share were under constant scrutiny. Fairchild had been an early leader of the industry but was in danger of losing its competitiveness. Many critical sales and marketing strategies were affected by what these numbers indicated.

Fairchild’s Market Research and Planning group was located in Mountain View, California. They did research of their own to suggest trends and strategies, but they were often asked to respond to questions from exec-level senior management plus others in sales and marketing. Quick answers to some of those questions were very important, but in a lot of cases not easy to produce.

The biggest source of meaningful data was from a semiconductor industry association that virtually all the major semiconductor manufacturers belonged to. On a monthly basis each member company reported unit and dollar sales for each device type they produced. The device types were defined by the association so that reporting could be uniformly accomplished no matter what the naming and numbering nomenclature used by each member company.

So the task each month for Fairchild was first to produce a report for the association of standard device types, units and dollars from an in-house report that covered each Fairchild proprietary device number. The first big, time-consuming and error-prone task was a lot of crunching of Fairchild device categories, resulting in a paper report on a form supplied by the association. These reports were sent to a representative of the association, who held each company’s individual data in secrecy. The reports were consolidated into one big report that aggregated all the association members’ sales by generic device type. There was no identification of any individual company’s sales numbers, just the group’s totals. As a member you received back a copy of this report.

Then analysis and in-house reporting could begin. Since you knew what you reported by generic device type, and you now had last month’s industry figures by generic device type, you could make all kinds of calculations showing how Fairchild was doing relative to the competition. Standard monthly reports were produced around market share and price-per-unit trends, but there were also a lot of reports requested on an ad hoc basis, depending on the business climate at the time. Both of these processes were almost completely manual, utilizing a group of clerks who used the in-house reports to construct the industry report, submit it, then, after sitting on their hands for a few days waiting for the return of the consolidated report, begin work to generate in-house market share and other reports as needed. Except for the monthly Fairchild sales report (stacks and stacks of fan-fold line printer output), which was produced by Fairchild’s mainframes and Management Information Systems (MIS) department, the rest of the processes were manual and at best helped by electric calculators! That is the way it was for virtually every device manufacturer.

My involvement began one day when I was at my very first job fresh out of Georgia Tech, working for Fairchild Semiconductor (and Mike Markkula). The head of Market Research and Planning (MR&P) came into Digital Product Marketing, where I worked in Medium Scale Digital Devices as a product marketer. He was looking for someone to help complete a computer automation task that was suspended in midstream. MR&P had already decided that the current manual system described above had to be changed drastically. They had found that MIS could not do the interactive parts of the requirements at all, and that the application would only be addressed sometime in the future, when the backlog of other MIS projects allowed. In those times that was normal. New computer applications often took months before they could even be assessed for requirements and design by MIS, and even longer for implementation. Plus, ongoing support for operations and the inevitable changes and enhancements posed an even bigger problem.

To be fair, most companies’ MIS groups were running as hard as they could to keep up with requirements for core company needs: manufacturing scheduling and reporting, sales reporting, payroll (although most came to rely on third-party vendors for that) and other basic business needs. To get them to focus on a departmental need required some juice, and even then the line of projects in front of you could be long. They had small staffs for operations and even smaller ones for evaluating and building new applications. Support for those applications was also sparse. The computers of the time were mainframes that required a special environment, virtually all contained in a glass room with raised floors. Power and air conditioning had to be special. Access was, as a matter of necessity, limited. Remote access was essentially unheard of. Programs were scheduled like buses or airline flights and run as ‘batch’ jobs at mostly designated times. Applications of lesser importance were at the mercy of those deemed ‘critical’. So you can see why there was an opportunity for a better answer and why people embraced it.

That was the reality of the choices available to Fairchild’s MR&P in deciding how to meet company and department requirements. In the next installment I will tell you what was actually done and how it was accomplished with computer timesharing versus in-house MIS.

The Cloud Before it was Called the Cloud

“The Cloud” has slightly different meanings for different people and in different contexts. But in most cases it is the term for the rich and powerful environment that provides access to applications, content and data from resources most of us do not even have to know or care about. From the end-user perspective it’s just there and available. And it is accessible from almost anything you can name: desktop, set-top, portable, handheld, or any number of other devices. It is powerful, convenient, cost effective and gets our requests fulfilled.

But did you know that today’s “Cloud” is just a more evolved and ubiquitous form of another technology-enabled environment, one that was in its prime in the mid 1970s and had its beginnings more than 50 years ago?  It was called Computer Timesharing. Stay tuned right here for the first installment of an upcoming multipart blog that will describe why it came into being, what it was, how it was used and how it was powered. It changed many key aspects of computing, from networks to customer support, pioneering things we take for granted today!

This is a blog by people who were there as developers, providers, support people, operations people and end users. The users were scientists, engineers, computer professionals, business people and government agencies, among many others. Timesharing touched them all, gave great value and led the way to today’s Cloud…coming January 30!


Do you have firsthand experience with computer timesharing from the 60s or 70s? Think you might have a story or commentary that could become a new post in our series and evoke comments from others? If so, please contact us about turning it into a blog post and adding to our recounting of computer timesharing and its role in developing critical technologies, services and policies for today’s Cloud…

Information Technology Corporate Histories Collection

The Information Technology Corporate Histories Collection was created under a grant from the Alfred P. Sloan Foundation.  The SI SIG’s predecessor, the Software History Center, partnered with the Computer History Museum and the Charles Babbage Foundation to administer the grant to develop the website and establish the collection.

Materials from 56 software and services, data storage and semiconductor companies are included in the collection.  These include personal anecdotes, company documents, and company timelines.  The collection includes an overview description of each company, 1,955 timeline events, 2,021 documents, 307 personal stories of people active in these companies, and 165 references to related materials available from other sources.

Substantial materials have been collected for the following software and services companies.  To view the collection for each company click on the company’s name.

Adobe Systems, Inc.
AGS Computers, Inc.
Apple Computer
Boole & Babbage
Cincom Systems, Inc.
Computer Sciences Corporation
Computer Task Group
Computer Usage Company
Digital Research Inc.
General Electric Information Services (GEIS)
International Computer Programs, Inc. (ICP)
National CSS
Peter Norton Computing
Ross Systems
Software AG North America
Software Design Associates
Software Publishing

To see a complete list of all the companies included in the Information Technology Corporate Histories Collection, click here.

Materials Collection

Our mission to preserve the history of the software industry includes encouraging organizations and individuals who have historical materials in their files to donate those materials to an archival organization where they can be preserved for the future and also be catalogued and indexed to make them accessible to researchers.

Individuals and organizations who have made donations of substantial collections of historical materials as a result of our efforts are the following.

Donated to the Charles Babbage Institute, Minneapolis, MN

ITAA (now known as TechAmerica): ADAPSO documents

Lawrence A. Welke: ICP publications and documents

Joan Wessel: Milton Wessel’s ADAPSO documents

Donated to the Computer History Museum, Mountain View, CA

Carl Baltrunas: Tymshare/Tymnet documents and objects

Roger Dyer: GEIS documents

Werner Frank: Professional papers and Informatics documents

Burton Grad: Professional papers and GE, IBM and Burton Grad Associates documents

Karol Hines: Ross Systems documents

John P. Imlay, Jr.: Professional papers and MSA documents

Rich Lynn: Tymshare documents

Robert MacDonald: Informix documents

Gary Morgenthaler: Ingres and Tymshare documents

Tim O’Rourke: Tymshare documents

Robert Patrick: Professional papers

Lawrence J. Schoenberg: Professional papers and AGS documents

Materials for the Computer History Museum archives have also been donated by attendees at our Pioneer Meetings and contributors to the IT Corporate Histories Collection. These collections consist of a number of donations, each too small to be listed individually, but combined they comprise a significant collection for each of the companies or organizations listed below.

AGS Computers

Cincom Systems

Computer Usage Company

Digital Research, Inc.

NACCB (National Association of Computer Consultant Businesses)

Ross Systems

Software AG

Software Design Associates

Software Publishing

Updata Capital, Inc.

Relational Database Management Systems: The Business Explosion

IEEE Annals of the History of Computing, Vol. 35 Number 2, April-June 2013
This special issue (part 2 of a series which began with the special issue in October–December 2012) tells the history of how IBM and several new, independent software companies built companies that supplanted the database management system companies and their DBMS models in both query-oriented usage and in many transaction-processing applications. The story of this transformation describes how each of these pioneering relational database management companies developed and marketed their products to meet the relational challenge and how well they succeeded. The result was explosive business growth and creation of five companies with more than $1 billion in sales. This special issue focuses on the growth of four of the leading RDBMS companies, with recollections by the pioneers about the history of the companies that they worked for: IBM, Oracle, Informix, and Sybase. Burton Grad was the guest editor.
The articles and the authors of each are listed below. Click on the name of the author(s) to see an abstract of the article on the IEEE website.

Burton Grad, Guest Editors’ Introduction: Relational Database Management Systems: The Business Explosion

Andrew Mendelsohn, The Oracle Story

Rick Bennett, Oracle Marketing: Killer Ads

Bob Epstein, History of Sybase

Roger Sippl, Informix: Information Management on UNIX

Donald J. Haderle and Cynthia M. Saracco, The History and Growth of IBM’s DB2

Marilyn Bohl, Product Managing DB2’s Early Development

Hershel Harris and Bert Nicol, SQL/DS:  IBM’s First RDBMS

Donald R. Deutsch, The SQL Standard:  How It Happened

Relational Database Management Systems: The Formative Years

IEEE Annals of the History of Computing, Vol. 34 Number 4, October-December 2012
This Annals special issue tells the story of how the transformation to RDBMSs began and describes how three companies pioneered the development of relational database management products to meet the relational challenge and build the foundation for the growth of a multibillion dollar industry.  It includes six articles by industry pioneers recording this early development of RDBMSs.  It also includes two articles by prominent computer historians (David Alan Grier and Martin Campbell-Kelly) which provide historical context and interpretation of the events described by the industry pioneers.   Burton Grad was the guest editor.
The articles and the authors of each are listed below. Click on the name of the author(s) to see an abstract of the article on the IEEE website.

Burton Grad, Guest Editors’ Introduction: Relational Database Management Systems: The Formative Years

David Alan Grier, The Relational Database and the Concept of the Information System

Martin Campbell-Kelly, The RDBMS Industry: A Northern California Perspective

Hugh Darwen, The Relational Model: Beginning of an Era

Bradford W. Wade and Donald D. Chamberlin, IBM Relational Database Systems: The Early Years

Bradford W. Wade, Compiling SQL Into System/370 Machine Language

Robert Preger, The Oracle Story, Part 1: 1977-1986

Lawrence A. Rowe, History of the Ingres Corporation

Donald D. Chamberlin, Early History of SQL

Mainframe Software: Database Management Systems

IEEE Annals of the History of Computing, Vol. 31 Number 4, October-December 2009
This issue is focused on the products, companies, and people who designed, programmed, and sold mainframe DBMS software products beginning in the 1960s and 1970s.  It includes eight articles by industry pioneers recording the history of significant database management software products as well as two articles by prominent computer historians (Tim Bergin and Thomas Haigh) which provide historical context and interpretation of the events described by the industry pioneers.   Guest editors were Burton Grad and Thomas J. Bergin.
The articles and the authors of each are listed below.  Click on the name of the author(s) to see an abstract of the article on the IEEE website.

Burton Grad and Thomas J. Bergin, Guest Editors’ Introduction: History of Database Management Systems

Thomas Haigh, How Data Got its Base:  Information Storage Software in the 1950s and 1960s.

Thomas J. Bergin and Thomas Haigh, The Commercialization of Database Management Systems, 1969-1983

Charles W. Bachman, The Origin of the Integrated Data Store (IDS): The First Direct-Access DBMS

Thomas M. (Tom) Nies, Cincom Systems’ Total

Robert L. Patrick, IMS @ Conception

William C. McGee, The Information Management System (IMS) Program Product

Robert L. Brueck, System 2000: The MRI Systems Corporation

Orrin Stevens, Jr., The History of Datacom/DB