The History of Computer Animation


If you want to get an understanding of the importance, scale and ubiquity of computer animation, all you need to do is take a look at the top 50 highest-grossing films of all time. The number one spot is taken by Avatar, a film that relies so much on computer animation that its development was delayed by years in order to allow technology to catch up. There are numerous fully computer-animated movies in the list – Frozen, Toy Story, The Lion King, Minions, and loads more. In fact, we think you would be hard pushed to find a single movie in the entire list that doesn’t involve some type of computer-generated imagery.

It’s hard to imagine a time before computer animation. Without it, many modern films would be unrecognizable, if they existed at all, and the video game industry would be a very different place. How did this technique become so huge and important? Let us take you on a journey to find out.

Hold Up, What is Computer Animation?

First off, we need to pin down what is meant by computer animation. Broadly speaking, computer animation is a general term that refers to the art of creating animated images through the use of computers and digital tools. It follows the same basic principle as other kinds of animation, such as stop-motion animation and illustrated 2D animation, where very small steps are made on a frame-by-frame basis to create the illusion of movement.

In the case of computer animation, an image is displayed on a computer monitor before being rapidly replaced by a new image that is almost identical, but just a very tiny step forward in time. This process takes place at a rate of 24, 25 or 30 frames per second and as a result the human eye perceives it as being continuous movement.

The way computer animation works is different depending on whether it is 3D or 2D. In 3D animation objects are digitally produced and then placed on conceptual frameworks or ‘skeletons’. In the animation of 2D illustrations, separate objects and separate transparent layers are used in the animation and they can be used with or without a virtual skeleton.

Whether it is 2D or 3D, the technique by which the animation is produced is the same. Take the example of a figure animation: the placement of the limbs and facial expressions of a character are moved on what are called key frames. The differences in appearance between the key frames is then automatically worked out by the computer, and it fills in the gaps, so to speak. This key frame illustration process is known as tweening or morphing.
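The gap-filling the computer does between key frames can be sketched in a few lines of code. This is a minimal illustration of linear tweening, assuming a character pose is just a dictionary of named joint positions; real animation software uses far more sophisticated interpolation (easing curves, splines), but the principle is the same.

```python
def tween(key_a, key_b, t):
    """Linearly interpolate between two key frames.

    key_a, key_b: dicts mapping joint names to (x, y) positions.
    t: progress between the key frames; 0.0 gives key_a, 1.0 gives key_b.
    """
    return {
        joint: (
            key_a[joint][0] + (key_b[joint][0] - key_a[joint][0]) * t,
            key_a[joint][1] + (key_b[joint][1] - key_a[joint][1]) * t,
        )
        for joint in key_a
    }

# Two hypothetical key frames for a character's arm, one second apart.
# Only the hand moves; the shoulder stays put.
start = {"shoulder": (0.0, 10.0), "hand": (5.0, 10.0)}
end   = {"shoulder": (0.0, 10.0), "hand": (5.0, 2.0)}

# At 24 fps, the computer "fills in the gaps" with 23 in-between frames.
frames = [tween(start, end, i / 24) for i in range(25)]
```

The animator only authors `start` and `end`; everything in between is computed, which is exactly why key framing saves so much labor compared to drawing every frame by hand.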

The final stage in the computer animation process is rendering. For 3D animations, rendering takes place after all of the modeling is complete; for 2D animations, the tweened frames generated from the key frames are rendered as needed.

Computer animation has massive advantages over other techniques. A key benefit is that it requires much less manpower than stop-motion or conventional hand-drawn cartoon animation. Rather than painstakingly painting every single frame, or moving 3D figures on a step-by-step basis, computing power can be utilized to achieve both of these things much more quickly, which is ideal for animation studios. This has led computer animation to be regarded as the digital successor to stop-motion and hand-drawn animation, and it’s now far and away the most popular animation technique.

When the Ball Started Rolling

There were some experiments in computer graphic animations during the 1940s and 1950s, with the most notable coming from John Whitney in 1958 when he created what is credited as the first ever computer animation. Collaborating with legendary graphic designer Saul Bass, Whitney created the title sequence for Alfred Hitchcock’s Vertigo using an analog computer. Whitney is now considered to be one of the fathers of computer animation and he has earned a spot in animation history.

The real acceleration of modern computer animation came after the advent of digital computers in the 1960s. Universities like Ohio State University and the University of Utah established departments to support computer animation, and other institutions like the National Film Board of Canada began experimenting with the new discipline, with many sharing the goal of creating an animation program. These early explorations in computer animation were mainly directed towards scientific, research, and engineering purposes. One of the first ever computer-generated films came out of Bell Laboratories in 1963. It had the catchy title A Two Gyro Gravity Gradient Attitude Control System and showed a box with edge lines that represented a satellite orbiting around a sphere that represented the earth.

In 1973, computer animation made it out of the lab and onto the big screen when digital image processing was used in a feature film for the first time. Director Michael Crichton enlisted John Whitney Jr (son of John Whitney) and Gary Demos of Information International, Inc. for some of the scenes in Westworld. To reflect the point of view of the android in the movie, they digitally processed the film to make it appear pixelated.

Soon after, wireframes started to appear in films such as George Lucas’ Star Wars and Ridley Scott’s Alien. George Lucas was very interested in pursuing advances in CGI around this time. In 1979 he took some of the top talent from the highly respected Computer Graphics Laboratory at the New York Institute of Technology and set up his own special effects division. This division was later spun off as an independent company with funding from Apple co-founder Steve Jobs. And what was this new company called? Pixar.

Throughout the 1970s and early 1980s, technological advances continued to be made, with the introduction of the framebuffer. By the 1980s this new technology was pushing digital animation into new places.

More Than Acceptable in the 80s

Advances in computer power, combined with increased affordability and new developments in commercial software, meant that throughout the 1980s the quality and prevalence of computer animation and computer-generated imagery kept increasing.

This was the era in which solid 3D CGI was improved and developed to the point where it could be used in a movie for the first time. Walt Disney’s Tron was released in 1982, and it is now regarded as a real milestone in the movie industry, with its use of solid 3D CGI, a first for a film, representing a giant step forward. The vehicles and digital terrains of the film were all produced by CGI, and showed what could be achieved with the technology. From here on in, we see CGI being used in more and more movies, right up to the present day.

Morphing or tweening also improved dramatically in the 80s. Up to this point, morphing was mainly used with vector animations, but by the early 1980s the technology was enabling morphing to happen between photographic images to create photorealistic animation. The first public example of this in action came from the New York Institute of Technology in 1982 when at a conference Tom Brigham from the institute presented a video sequence of a woman morphing into a lynx.

By 1988 morphing had made its way onto the big screen in Ron Howard’s movie Willow, and it was also used to great effect in Terminator 2: Judgment Day in 1991. The technique probably reached the peak of its popularity when Michael Jackson used it in his music video for Black or White. The video premiered simultaneously in 27 different countries to reach an audience of 500 million people and brought morphing to the forefront of public consciousness. Computer animation was heading for the big time.

Breaking on Through


The 1990s was the decade in which computer animation really started to take over and become a significant part of the film and TV industry. The CGI and morphing used in Terminator 2: Judgment Day was regarded as the most substantial use of CGI in a movie since Tron way back in 1982. The 90s also saw the first 2D animated movies to be produced using only the Computer Animated Production System (CAPS).

In 1990 Walt Disney released The Rescuers Down Under, which was created using just the CAPS system. Walt Disney then followed up with Beauty and the Beast in 1991. It was once again made using only CAPS, but took the technology even further by incorporating 3D CGI effects too. The movie was a huge box office success and also became the first animated film to be nominated for an Oscar for Best Picture.

CGI was used in an increasing number of movies and TV shows, such as Jurassic Park, Babylon 5 and The Lion King. Then in 1995, another huge milestone was reached – Disney-Pixar released the first ever fully computer-animated feature film, Toy Story. A runaway success, Toy Story went on to become one of the highest-grossing films of all time, and revealed the true potential of computer animation and 3D characters. Now, computer animation is far and away the most prevalent type of animated film.

Another big development in computer animation in the 1990s came from improvements in motion capture. In short, motion capture records the movements of people or external objects. For human motion capture, a person wears a series of markers placed near each joint, and the movement of those markers is tracked. Often, a special suit with markers will be worn. Motion capture can also be used to record details such as facial expression and hand movement. The data that is captured from the movements is then mapped to a 3D model, and new graphic and animation elements can be placed over it.
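The mapping step described above can be sketched very simply. This is a hypothetical illustration, assuming each recorded marker drives exactly one joint of a character rig; production pipelines solve for full joint rotations rather than copying positions, but the core idea of retargeting captured data onto a 3D model looks like this.

```python
# Hypothetical mapping from suit marker names to rig joint names.
MARKER_TO_JOINT = {
    "M_elbow_L": "left_elbow",
    "M_wrist_L": "left_wrist",
}

def retarget(capture_frames):
    """Map raw motion-capture frames onto a character skeleton.

    capture_frames: list of dicts {marker_name: (x, y, z)}, one per frame.
    Returns a list of dicts {joint_name: (x, y, z)} used to drive the rig,
    ignoring any markers the rig doesn't know about.
    """
    return [
        {
            MARKER_TO_JOINT[marker]: position
            for marker, position in frame.items()
            if marker in MARKER_TO_JOINT
        }
        for frame in capture_frames
    ]

# One captured frame from the (hypothetical) suit:
capture = [{"M_elbow_L": (0.1, 1.2, 0.0), "M_wrist_L": (0.3, 1.0, 0.1)}]
rig_frames = retarget(capture)
```

Once the captured movement is driving the skeleton, animators can layer the character’s geometry, skin, and facial detail over it, which is how a human performance ends up inside a fully CGI character.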

Motion capture’s initial application was as a biomechanics research tool. It was then first used for more commercial purposes in video game production in the late 1980s, before being adopted by the film industry in the late 1990s. A notable example from this time was the use of motion capture to create the Jar Jar Binks character animation in Star Wars: Episode I – The Phantom Menace. A lot of people had a strong dislike for this character, so perhaps it wasn’t the best use of the new computer animation technique after all.

One of the biggest breakthroughs in motion capture was Andy Serkis’ performance as Gollum in The Lord of the Rings: The Two Towers. This was the first time that a feature film used a real-time motion capture system. The technique was able to transfer the nuances of Serkis’ performance onto the facial animation of Gollum and gave the CGI real human character.

Computer Animation Today

Nowadays, computer animation and computer-generated images are the absolute norm in the television, film, and video game industries. Incremental improvements in technology have increased the capabilities of CGI while simultaneously making it more accessible. Increased processing power and better software means that high-quality computer animation is no longer confined to major players with powerful workstation computers.

Many home computers are now capable of producing computer animations that would have previously required giant dedicated rendering machines. This has created new opportunities for individuals and companies to experiment with animation. If you want to learn how to animate, the barrier to entry has never been lower and the potential has never been greater. You no longer need an animation degree to get started in the field; you can start right now.

The reason CGI and computer animation have become so popular is that they make almost anything possible – the only limit is your imagination.

And the same goes for illustrators and graphic designers using Vectornator. Thanks to our amazing tools, there’s really no limit to what you can design!
