History of Computer Graphics

Copyright (c) Susan Laflin. August 1999.

Although the earliest computers were not intended to produce graphical output, the advantages of using graphs to present and understand the results of complex calculations were so great that crude forms of graphical output appeared at a very early stage. Initially it was necessary to program the screens or plotters in machine code, so only a few experts were able to use them. Even when it became possible to use assembler code and split the programs into reusable subroutines, graphics programming was still out of reach for the majority of users. An additional problem was the scarcity of actual graphical devices, and many users attempted to get an acceptable picture or graph produced on the line-printer.

In the mid-1960s, FORTRAN became widely accepted as a high-level language for scientific calculations and, along with this, the development of libraries of graphics subroutines became common. Some were written in FORTRAN and others in machine or assembler code, in a form suitable for calling from FORTRAN programs. There was considerable competition between different manufacturers to produce their own "bigger and better" set of subroutines to help sell their hardware. Since there are only a limited number of operations that can be provided on graphics systems, there was inevitably a large overlap in the software, but this was concealed by the choice of different names for corresponding routines from the various suppliers. Any attempt to transfer software from one system to another therefore required a great deal of translation, much of it caused by the differing nomenclature chosen by the suppliers.

This was the position in 1974, when the Special Interest Group for Computer Graphics of the ACM (Association for Computing Machinery) met for their annual summer conference. ACM is the main organisation in America for all types of Computing and their SIGGRAPH meetings and publications are a major influence on Computer Graphics throughout the world. On this occasion, the diversity of names and formats in libraries of computer graphics subroutines and the large overlap in functionality of these same libraries was a major topic for discussion. They set up a Graphics Standards Planning Committee to investigate the whole area and report back to a later conference. Since there was no sign of any other body attempting anything similar, there was no sense of urgency and so this committee took its time and the report was presented at the 1977 SIGGRAPH Conference.

Meanwhile, in 1976, IFIP (the International Federation for Information Processing) organised a conference at Seillac in France on "The Methodology of Computer Graphics", and this also had a great effect on current thinking. It influenced the report of the Graphics Standards Planning Committee (GSPC) to the SIGGRAPH conference and also, in due course, led to a European initiative in standardisation.

This report was published in the proceedings of the 1977 SIGGRAPH Conference and still makes interesting reading. The first part contains an analysis of the software then current, showing that most of it took the form of libraries of subroutines for use with FORTRAN programs. The possibility of special-purpose graphics languages was mentioned, but it was assumed that they were not sufficiently widely used and were changing too rapidly to be suitable subjects for standardisation. To some extent, this was still the case in the early 1990s. The report also included tables comparing the facilities offered by a number of widely-used subroutine libraries.

The second part was largely devoted to arguing the case for standards in Computer Graphics, but also contained proposals for a common "Core" of graphical subroutines which ought to be provided by any subroutine library. This closely resembled the intersection of all the facilities offered by individual libraries described in the first section, but was not identical in its scope. For example, it was assumed that such a library must include some three-dimensional graphics, although a few of the libraries (Ghost for example) only included two-dimensional graphics on the grounds that all output devices currently available were two-dimensional.

Much of this document is devoted to a discussion of the advantages of graphics standards, presented in "Computer Graphics" volume 11 number 3 for Fall 1977. The main theme is portability, both of software and of staff. It was realised that much effort was wasted in rewriting software whenever an installation changed its computer system. The time spent relearning a new system when staff moved to another job, and the difficulty of passing software from one installation to another, were recognised, but it was less widely accepted that these were bad things: firms did not want to encourage their staff to move elsewhere or to run existing software on other manufacturers' computers. The justification for standards was fully discussed in this issue of the journal.

The Seillac conference mentioned above is quoted as one of the references in the SIGGRAPH report. It also started people in Europe thinking about these matters and, in due course, led to the development of GKS. Many of the concepts which we now take for granted were first formulated at Seillac. Some of them follow here.

Methodology of Computer Graphics

a) There shall be separate input and output functions.

It is very much easier to read a program with calls to subroutines READ and PLOT than to read low-level coding where instruction No.165 is a peripheral transfer and different settings of parameters provide different forms of input and output. Most programmers had now reached the stage of using high-level routines with meaningful names and were determined never to return to their earlier difficulties. In addition, nobody wanted to rewrite routines involving both input and output if, for example, the installation received a new plotter.

b) As far as possible, all devices shall be programmed in the same way.

Anyone who has ever been caught with a program waiting for an impossible event, such as input from a plotter, will be painfully aware that not all devices are interchangeable. However, it is very useful to be able to write a program and know it will run on a variety of different devices. In particular, you can save yourself much embarrassment and wasted paper by examining your graph on the screen of a terminal before sending it to the plotter.

c) Coordinate Systems

The usual arrangement shall be a "world coordinate system", in which the problem is specified, a "device coordinate system", describing the screen or plotter on which the graph is drawn, and a "viewing operation" to transform from one to the other.
Although this may seem unnecessary in very simple cases, as problems become more complex this way of keeping the picture description distinct from the output device makes it possible to handle very complicated designs quite successfully.
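
As a concrete illustration of this viewing operation (not taken from any of the standards documents), the short Python sketch below maps points from a world-coordinate window onto a device-coordinate viewport by scaling and translating; the function and variable names are invented for the example.

    def make_viewing_transform(window, viewport):
        """Return a function mapping world (window) coordinates to
        device (viewport) coordinates by scaling and translating."""
        wx_min, wy_min, wx_max, wy_max = window      # world-coordinate window
        vx_min, vy_min, vx_max, vy_max = viewport    # device-coordinate viewport
        sx = (vx_max - vx_min) / (wx_max - wx_min)   # horizontal scale factor
        sy = (vy_max - vy_min) / (wy_max - wy_min)   # vertical scale factor

        def to_device(x, y):
            return (vx_min + (x - wx_min) * sx,
                    vy_min + (y - wy_min) * sy)

        return to_device

    # Example: the problem is specified in a 10 x 10 world window and drawn
    # on a 1024 x 768 screen.
    world_to_screen = make_viewing_transform((0.0, 0.0, 10.0, 10.0),
                                             (0.0, 0.0, 1024.0, 768.0))
    print(world_to_screen(5.0, 5.0))   # centre of the window -> (512.0, 384.0)

Because only the viewing transformation refers to the device, the same world-coordinate picture can be sent to a different screen or plotter simply by supplying a different viewport.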

Raster Terminals

Between 1977 and 1979, a sudden improvement in hardware occurred which showed up one of the weaknesses of any standard, namely that unforeseen developments are by definition non-standard. This particular example was the proliferation of raster graphics terminals with the resulting drop in cost and at the same time an improvement in reliability. Most of the terminals you will meet today are raster terminals, although they can be programmed to conceal this fact.

The previous type of terminal was the vector refresh display where lines had to be redrawn under the command of the program several times a second, otherwise they would start to fade and would flicker when they were redrawn at their original brightness. This automatically put a limit on the complexity of the drawing because if you tried to add too many lines to your picture, the program couldn't get round in time and the whole drawing would flicker, making it unpleasant to view.

The raster terminal, on the other hand, contains a frame buffer in which a copy of the screen image is stored. The screen is automatically refreshed from this buffer, and the refresh takes exactly the same length of time whether the screen is full of lines or nearly empty. The programmer can forget about having to redraw the picture: any graphics output sent to such a terminal is copied into the buffer, and the change is automatically transferred to the screen.

If you are using a black and white terminal (or any other monochrome screen) then the frame buffer has only one plane. Each bit in the buffer may have either the value 0 (off) or 1 (on) and corresponds to one "picture-cell" or "pixel" on the screen. A row of 1's in the buffer appears as a line on the screen. If we have a terminal with 4 possible colours, then the frame buffer must have two planes and so possible values for the pixels are 00, 01, 10 or 11. These are interpreted as dots of the appropriate colour on the screen.
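
As a toy illustration (again, not taken from any of the standards documents), the Python sketch below models such a two-plane frame buffer; the buffer size and the colour table are invented for the example.

    # A toy model of a two-plane frame buffer: each pixel holds a 2-bit value
    # (00, 01, 10 or 11), which is looked up in a small colour table on refresh.
    WIDTH, HEIGHT = 16, 8

    # Hypothetical colour table for a 4-colour terminal.
    COLOUR_TABLE = {0b00: "black", 0b01: "red", 0b10: "green", 0b11: "white"}

    # The buffer itself: one 2-bit value per pixel, initially all zero (a blank screen).
    frame_buffer = [[0b00] * WIDTH for _ in range(HEIGHT)]

    def set_pixel(x, y, value):
        """Write a 2-bit value into the buffer; the (simulated) screen is
        refreshed from this buffer, so the change appears automatically."""
        frame_buffer[y][x] = value & 0b11

    # A row of non-zero values in the buffer appears as a line on the screen.
    for x in range(WIDTH):
        set_pixel(x, 3, 0b11)          # draw a white line across row 3

    # "Refresh": scan the whole buffer at a fixed rate, converting each pixel
    # value to its colour, much as the hardware does.
    for row in frame_buffer:
        print(" ".join(COLOUR_TABLE[p][0] for p in row))

The refresh loop at the end touches every pixel whether the picture is full or nearly empty, which is why the refresh time of a raster terminal is independent of the complexity of the drawing.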

This new type of terminal and the operations associated with it were completely outside the scope of the first edition of the Core. In the summer of 1979, the Graphics Standards Planning Committee presented a revised version of the Core, but it was not sufficiently revised to incorporate raster terminals in the body of the text; instead, raster support was tacked on the end as a lengthy appendix. This was published as a special edition of Computer Graphics for Fall 1979. Having published it, the committee felt its duty was done and waited for users and manufacturers to produce the appropriate software. Many American manufacturers have produced their own versions of the Core, implementing those parts of it which suit their software.

Eurographics and GKS

Meanwhile the Europeans, and especially the Germans, were not remaining idle. In 1980, EUROGRAPHICS (the European Association for Computer Graphics) was formed and, in September of that year, held its first conference at Geneva. Following discussions at that conference, the proposal for GKS as a two-dimensional standard for Computer Graphics was submitted to ISO. If the SIGGRAPH Core proposals had been submitted to ISO in the previous year, it is probable that ISO would have considered a two-dimensional standard unnecessary, but SIGGRAPH had omitted to do this. The submission of the GKS proposal to ISO was followed by lengthy discussions with all interested parties, and some of the ideas of the Core, especially those relating to forms of text output, were incorporated into GKS before it was published in 1982 as Draft International Standard 7942.

GKS was the first attempt to design an international standard for computer graphics. The original German title (Graphisches Kernsystem) becomes "Graphical Kernel System" when translated into English or American, which luckily preserves the same initials. It is a two-dimensional system (since most, if not all, graphics output appears on two-dimensional screens or plotter paper) and assumes that software for the more complex three-dimensional pictures will be designed to surround the kernel. It has received approval from the International Organization for Standardization (ISO) and is fully documented. Software written to run on one certified implementation of GKS should run without alteration on any other.

Discussions continued on minor details, and it seems likely that those who had put effort into developing the Core were not happy with the emergence of GKS. SIGGRAPH organised a vote of its members to decide whether ISO or SIGGRAPH should be the appropriate authority for graphics standards in America, and I am pleased to report that the result was a large majority in favour of a single body, ISO, deciding on all standards. In February 1984, SIGGRAPH published a special issue describing GKS, which was sent to all its members. GKS has now become an international standard and many manufacturers have produced implementations of it, although not all of these implementations have been validated and certified by the appropriate authority, and uncertified implementations are likely to contain their own ideas on some points.