In the beginning there was only esoteric binary code. Then a simple evolutionary step forward, to readable character text, transformed human access to the digital world.
In today’s era of the ubiquitous Internet, smart devices and social media, the way we actually connect with our digital world seems obvious to a general population that has grown up with it as a “natural” extension of human communications. To a young child, the computer itself has become largely invisible. Now a seven-year-old girl offers me advice on how to organize my phone display. And I listen.
Some historical perspective is in order.
In 1963 Bill English and Douglas Engelbart at the Stanford Research Institute (SRI) in Menlo Park, California, invented the “mouse” and mouse-driven cursor as a computer interface to select text for word processing. In 1971 English moved to Xerox’s famous Palo Alto Research Center (PARC), followed by several other SRI people, and in 1973 PARC’s Alto became the first computer to use a desktop graphical user interface (GUI) with windows. In 1979 at Apple, former Xerox PARC people continued to develop these ideas for Apple’s Lisa and Macintosh personal computers.
In 1980 Microsoft had a contract with IBM to develop an operating system for IBM’s first personal computer. A year later Microsoft bought the rights to 86-DOS from Seattle Computer Products (SCP), without telling SCP about the contract with IBM. 86-DOS was modeled on CP/M, the most common microcomputer operating system at the time. (SCP later agreed to a million-dollar settlement with Microsoft.) Development of 86-DOS led to MS-DOS, and the IBM Personal Computer was then released with MS-DOS as its operating system.
In 1981, working as a programmer and computer training coordinator at Amoco Canada Petroleum in Calgary, I was asked to recommend a desktop system as a potential new standard for the company. The ongoing GUI development at Apple did not figure prominently in my analysis. There was a tendency in some circles at the time to dismiss graphics as something for the artistic crowd rather than for hard core business applications; on the other hand, “nobody ever got fired for recommending IBM”.
In 1983 the Apple Lisa featured a graphical interface. The Macintosh, released in 1984, was the first commercially successful product to use a windowed interface. Microsoft Windows 1.0, a GUI for the MS-DOS operating system, was released in 1985.
Meanwhile, the Unix operating system, developed at AT&T Bell Labs starting in 1969-1970, was eventually ported to a wide variety of hardware platforms. The windowing system of the Unix world, the X Window System, was developed at the Massachusetts Institute of Technology and released in the mid-1980s. Thanks to a free open source license, X became the standard graphical interface for virtually all Unix, Linux and other Unix-like operating systems, with the notable exceptions of macOS and Android.
Rapid advances in other digital interfaces have now reached critical mass, marking a new turning point in human-computer communications. In recent years, voice recognition and voice synthesis, natural language translation and autonomous robotic control have exploded on the digital scene. The underlying technologies for these profound advances have shifted from programming computers with rigid and often imperfect rules to training computer networks that can adapt to new information by learning.
Where will the new technologies take us? This question makes many people nervous. Predicting the future is always a roll of the dice, so I can’t answer that with any certainty, but I’m not particularly worried. As a computer scientist, I see wonderful opportunities ahead; as an amateur student of human nature, I see a coin toss on our chances for survival. Either way, our future depends on our ability to adapt to tidal waves of change. Business as usual is not an option.
P.S.: Wayne wrote his first computer program in high school in January 1969, for an IBM System/360 Model 30. Unfortunately, from that point on through his life, he learned very little that was useful and not related to computers.