The Pages of Tech History

A look at the evolution of technology from the pages of Computerworld

In 1964, a young Patrick J. McGovern founded International Data Corporation out of his house in Newton, Mass., a modest start to what would become a global publishing empire.

McGovern’s company, eventually renamed International Data Group (IDG), launched its first publication, Computerworld, in June 1967. The magazine went on to become the flagship of IDG’s nearly 300 titles, which included PCWorld and Macworld. With its staff of tech reporters and editors and its commitment to high-quality journalism, Computerworld quickly became the go-to source for the latest news and information for technologists, computer scientists, and consumers. In the process, Computerworld has chronicled the history of the computer revolution.

Boxes of magazines
The McGovern Foundation retrieved dozens of boxes of old IDG publications from storage and has worked with the Internet Archive to scan and preserve them for the public.

The McGovern Foundation has embarked on a project to preserve and share that history by working closely with IDG and the Internet Archive, a non-profit digital library. Three years ago, the Foundation retrieved dozens of boxes of IDG print magazines stored in a Framingham, Mass., warehouse. Since then, an Internet Archive team housed in the Boston Public Library has been digitally scanning IDG’s main print publications.

Computerworld is now fully scanned, and the issues can be viewed and downloaded as PDFs at Archive.org. As part of an ongoing curation of this historical material, the Foundation will continue to publish tech history stories inspired by Computerworld. The series is authored by McGovern Foundation intern Kathleen Esfahany, a rising junior at MIT majoring in computation and cognition.

Tech History Stories

First issue of Computerworld
The first issue of Computerworld magazine.

Most young computer scientists have no idea what a line of COBOL looks like, even though there are an estimated 200 billion lines of COBOL in use today, according to some accounts. In the 1960s, COBOL emerged as the dominant programming language for business applications. Since its inception, it has been harshly criticized by the computer science community for its verbose syntax and for lacking features its critics considered essential. Within a few years, rival business-oriented languages like RPG and ADPAC were making headlines as competitors to COBOL. However, policies instituted by the US Department of Defense ensured that COBOL remained the dominant force in the business world. Most critically, the DoD refused to lease or purchase any computer without a COBOL compiler.

In 1967, ADPAC made the front page of Computerworld’s launch issue for its claimed technical superiority over COBOL. One company interviewed at the time, STAT-TAB, reported that ADPAC had drastically cut both programming time and compilation time. A simple technical study was cited to highlight ADPAC’s advantage: the same program took 172 statements and 38 seconds to compile in ADPAC, but 665 statements and over an hour to compile in COBOL. ADPAC was said to have achieved these gains by eliminating the worst features of COBOL and adding important ones that were missing.

College computer science programs have not taught COBOL to the last several generations of programmers, but COBOL is still with us today and has recently been in the news. Financial institutions and government services face an ever-growing skills gap as the small number of programmers with COBOL expertise exits the workforce. In fact, delays in distributing unemployment benefits and stimulus payments during the COVID-19 pandemic were attributed to the difficulty of finding programmers to help update COBOL systems.

–By Kathleen Esfahany

Tax preparation software like TurboTax has infused automation into the otherwise burdensome process of filing a tax return. Users of such software are guided through a series of simple questions while algorithms handle the calculations and fill out the appropriate forms to send to the IRS. When the return is ready, users simply hit “send” and the process is complete.

Original Computerworld story

Long before these automated systems were invented, tax returns were a paper-only endeavor. The slow evolution from paper to a digitized system began in 1966, when the IRS started accepting employer tax returns on a data storage medium called magnetic tape. First used for computer data storage in 1951, magnetic tape replaced punch cards and became a primary means of data storage in the 1960s. The tapes consisted of a narrow strip of plastic film coated with a magnetizable material such as ferric oxide.

In 1966, 450 companies collectively filed several million tax records across a few hundred reels of magnetic tape, according to Computerworld magazine and the 1966 IRS Annual Report. Replacing paper filings with magnetic tape generated substantial savings for both taxpayers and the IRS. For companies, filing returns on magnetic tape saved on costs associated with processing forms, paper, and shipping. For the IRS, magnetic tape eliminated the need to transcribe data from millions of paper documents onto punch cards and then onto magnetic tape. These benefits led to accurate predictions that more companies would take advantage of digitized filing in subsequent years.

While the vast majority of Americans now file their tax returns electronically, the digital transformation of the tax filing system remains incomplete. Around 15 million taxpayers filed their returns on paper in 2019, contributing to a months-long backlog of millions of pieces of unopened mail at the IRS. Once the mail is eventually opened, the IRS stores the tax return data on its data storage media, which still include modern magnetic tape. Magnetic tape technology has improved dramatically since 1966, with capacities growing from megabytes to terabytes per reel. Many institutions besides the IRS, including Google and Microsoft, also still rely on magnetic tape for archiving data.

–By Kathleen Esfahany

For decades, researchers have attempted to simulate the human visual system in machines, giving rise to a subfield of artificial intelligence called computer vision. At times, limited success and unrealistic expectations led to periods of harsh criticism and cuts in research funding known as “AI winters.” Over the past decade, computer vision has seen great success as the combination of increasingly powerful computing systems and massive image datasets has enabled researchers to train computers to perform with high accuracy on a multitude of vision tasks. These achievements have cemented computer vision as a cornerstone of modern innovations such as self-driving vehicles and automated checkout systems.

One of the earliest milestones in computer vision research was automating the identification of alphanumeric characters and common symbols, a task termed optical character recognition (OCR). In 1967, the U.S. company Recognition Equipment announced the development of a “Handprinting Reader Module” capable of character recognition and available for purchase with their Electronic Retina Computing Reader device. The module could recognize 40 characters (consisting of the 26 uppercase letters, the 10 numerical digits, and four special characters), surpassing all competitors, which were limited to recognizing only numbers and a few letters. 

The module worked by comparing each character to a set of stored patterns and determining the best match. The Electronic Retina system could then output the identified characters to one of several storage formats, including punch cards, a now-obsolete data storage medium. Advertisements placed by Recognition Equipment in Fortune magazine highlighted the potential benefits of using the Electronic Retina to replace entire departments of keypunch operators responsible for manually typing documents into punch cards. They claimed that airlines, credit card companies, and government institutions using the Electronic Retina had not only saved millions of dollars, but had also dramatically reduced errors and data processing time. The character recognition module was priced at $150,000, and the Electronic Retina itself started at $750,000, equivalent to several million in today’s dollars.
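
That matching step can be sketched in a few lines of Python. The sketch below is a modern illustration of matching a scanned glyph against stored templates, not Recognition Equipment’s actual method; the 5x5 character grids, the three sample templates, and the pixel-agreement score are all invented for the example.

    # Illustrative template matching, loosely in the spirit of 1960s character
    # readers: compare an unknown glyph against stored reference patterns and
    # pick the closest match. The templates are toy 5x5 grids, not the real
    # patterns used by the Electronic Retina.

    # Hypothetical 5x5 binary templates (1 = dark pixel, 0 = light pixel).
    TEMPLATES = {
        "I": ["11111", "00100", "00100", "00100", "11111"],
        "L": ["10000", "10000", "10000", "10000", "11111"],
        "1": ["00100", "01100", "00100", "00100", "01110"],
    }

    def match_score(glyph, template):
        """Count how many pixels agree between the scanned glyph and a template."""
        return sum(
            1
            for g_row, t_row in zip(glyph, template)
            for g_px, t_px in zip(g_row, t_row)
            if g_px == t_px
        )

    def recognize(glyph):
        """Return the template character whose pattern best matches the glyph."""
        return max(TEMPLATES, key=lambda ch: match_score(glyph, TEMPLATES[ch]))

    # A slightly noisy handprinted "1" (one pixel differs from the stored template).
    scanned = ["00100", "01100", "00100", "00110", "01110"]
    print(recognize(scanned))  # prints "1"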

Today, optical character recognition technology is commonplace and widely available for free through various websites and apps. 

–By Kathleen Esfahany

In 2020, digital storage is more affordable and accessible than ever. Consumers can purchase laptops with 8 terabytes (TB) of storage and buy an inch-long, 125 gigabyte (GB) flash drive for less than $15 on Amazon. Though now commonplace, 125 GB was enough to make headlines back in 1967. The $1 million IBM 1360 Photo-Digital Storage System (PDSS) appeared on the front page of Computerworld magazine for being the world’s first storage system capable of storing “1 trillion bits” (125 GB) of data. In 2011, an IBM computer storage system made headlines again, but this time for a record-breaking capacity of 120 petabytes – nearly 1 million times larger than the IBM 1360.
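
For readers keeping track of the units, the figures above line up with a quick back-of-the-envelope check (decimal prefixes throughout); the short Python calculation below simply restates the article’s numbers.

    # "1 trillion bits" (IBM 1360, 1967) expressed in gigabytes.
    bits_ibm_1360 = 1_000_000_000_000
    gb_ibm_1360 = bits_ibm_1360 / 8 / 1e9   # 125.0 GB

    # The 2011 system: 120 petabytes expressed in gigabytes.
    gb_ibm_2011 = 120 * 1e6                 # 120,000,000 GB

    print(gb_ibm_1360)                # 125.0
    print(gb_ibm_2011 / gb_ibm_1360)  # 960000.0, i.e. nearly 1 million times larger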

The Atomic Energy Commission (AEC) purchased the IBM 1360 PDSS for installation at the Lawrence Livermore National Laboratory (LLNL). The AEC was a federal agency established after World War II to direct the development of atomic technology. Due to concerns about insufficient protective regulations, the AEC was abolished in 1974. Before it was dissolved, the AEC established and oversaw many national laboratories, including the LLNL. From its inception, the LLNL has been a leader in scientific computing, successively acquiring the world’s most powerful computers to study challenging scientific problems. In 1967, the AEC used its record-breaking storage system to manage data from its research simulations.

The IBM 1360 PDSS stored data on small photographic film cards called chips. Sets of 32 chips were stored in thousands of plastic boxes called cells, which in turn sat on trays inside three large filing units. Cells were “blown” between the filing units and the reading station through a pneumatic shaft system. Data was “painted” onto the chips by an electron beam: the beam created dark marks on the film, while unmarked areas stayed light. This allowed a binary encoding in which a dark-light pattern represented a 0 and a light-dark pattern represented a 1.
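
The pairing described above can be modeled in a few lines: each bit occupies two mark positions on the film, and the order of the dark and light marks distinguishes a 0 from a 1. The Python sketch below is a toy model of that idea as described in the article, not the actual IBM 1360 chip format.

    # Toy model of the dark/light pairing: "D" marks a dark (beam-written) spot,
    # "L" an unmarked, light spot. A 0 is dark-then-light; a 1 is light-then-dark.
    ENCODE = {0: "DL", 1: "LD"}
    DECODE = {pair: bit for bit, pair in ENCODE.items()}

    def write_bits(bits):
        """Turn a bit sequence into the mark pattern 'painted' onto a chip."""
        return "".join(ENCODE[b] for b in bits)

    def read_bits(marks):
        """Recover the bit sequence from a scanned mark pattern."""
        return [DECODE[marks[i:i + 2]] for i in range(0, len(marks), 2)]

    pattern = write_bits([1, 0, 1, 1])
    print(pattern)             # LDDLLDLD
    print(read_bits(pattern))  # [1, 0, 1, 1]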

The installation at the LLNL was the first of just five “Cypress systems” ever installed by IBM, all contracted by the federal government. By the time the installations were completed, IBM had invented and begun promoting more advanced storage systems. Still, the Cypress systems were valued by their few users, and three of them remained in use into the late 1970s.

–By Kathleen Esfahany

Many of the most successful real-world applications of automation technology have occurred within the agricultural sector. Innovations combining artificial intelligence and robotics have achieved unprecedented levels of efficiency and environmental sustainability in agricultural tasks such as irrigation, herbicide application, and crop harvesting. A USDA survey conducted in 2018 found that the majority of corn – America’s most widely produced crop – was planted using automation technology. The widespread adoption of automation in agriculture has been driven not only by the success of the technology, but also out of necessity – with a dwindling farm labor force, automation has become critical to optimizing crop yields and meeting the demands of a growing population.

Modern automated harvesting technology relies on sophisticated computer vision systems and robotic grasping tools to precisely locate and pick ripe produce. In comparison, early approaches to automated harvesting were quite rudimentary, employing mechanical methods such as forcefully shaking trees to loosen their fruit. The tree shaking method caused damage to most produce, limiting its success to hard-to-damage crops like nuts. It was also imprecise, unintentionally harvesting and wasting unripe fruit. 

In 1967, an article in Computerworld magazine highlighted how analog computer simulations could improve the precision of the tree shaking method. A research team at Rutgers University had used a 100-pound analog desktop computer called the EAI TR-20 to record data from sensors placed on fruit trees and to model how applying varying amounts of force affected the trees and their fruit. The researchers hoped the insights gained from their models could optimize the tree-shaking method and reduce the amount of unripe fruit wasted.
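
As a rough modern illustration of the kind of forced-vibration question such a simulation answers, the Python sketch below integrates a simple damped-oscillator model of a shaken limb and reports how the peak motion grows with the applied force. The model form and every parameter value are hypothetical and are not taken from the Rutgers study or the TR-20 setup.

    import math

    def peak_displacement(force_amplitude, steps=20000, dt=0.001,
                          mass=2.0, damping=1.5, stiffness=200.0, shake_hz=5.0):
        """Integrate m*x'' + c*x' + k*x = F*sin(w*t) and return the peak |x|."""
        omega = 2 * math.pi * shake_hz
        x, v, peak = 0.0, 0.0, 0.0
        for n in range(steps):
            t = n * dt
            a = (force_amplitude * math.sin(omega * t)
                 - damping * v - stiffness * x) / mass
            v += a * dt            # semi-implicit Euler step
            x += v * dt
            peak = max(peak, abs(x))
        return peak

    # Sweep the applied force level, much as an analog model lets researchers
    # turn a dial and watch the response change.
    for force in (50, 100, 200):
        print(force, round(peak_displacement(force), 3))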

Though now obsolete, analog computers were once commonplace in laboratories and were important to many technological milestones, including early space travel. Analog computers represent data using continuous voltages or currents, while digital computers use discrete zeros and ones. The superior computational speed and accuracy of digital computers made analog computers obsolete after the 1960s.

Over 50 years later, mechanical tree shakers are still used to harvest nuts. Research to optimize tree shaking has evolved to include cutting-edge techniques in artificial intelligence. In a 2017 study from Washington State University, researchers trained a computer vision algorithm to detect the location of branches on a cherry tree and identify where to shake each branch to maximize the number of fallen cherries. The approach was successful: over 90% of the cherries fell when the identified locations were shaken.

–By Kathleen Esfahany