By Brian McKernan
Slightly more than a decade ago, computer-generated imagery (CGI) was the domain of university and military research labs and the design departments of the aerospace and automotive industries. Today CGI is used to generate such mainstream entertainment as the Toy Story films, photorealistic digital effects for TV and movies, and interactive content for the Internet and video games. In a relatively short time span, CGI went from being a difficult and expensive discipline dominated by scientists to an increasingly powerful--and inexpensive--creative tool for generating whatever moving image the mind can conceive. And CGI continues to advance, with each new Titanic-sized project adding new capabilities to the field's software repertoire.
As high-profile as its relatively recent uses in entertainment have been, CGI's applications in industrial design, medical imaging, and scientific visualization have been even more profound. In such areas CGI has been an effective tool in visualizing advanced geometrical objects too complex to render in any other fashion. One excellent example is the work by IBM researcher Benoit Mandelbrot in fractal geometry. His "Mandelbrot set" includes a computer-generated pattern that has been described as one of the most remarkable discoveries in the history of mathematics. Or, put another way, as "the thumbprint of God."
CGI, in this sense, augments the mind's visual abilities. And while it should be no surprise that CGI--a product of advanced mathematics--should prove such a boon to science and high-tech entertainment, it also can communicate data effectively in its simplest forms: static business graphics of pie charts and bar graphs. In television, CGI-based systems are used to generate sports scores and animate weather maps on the nightly news, and make arresting, dynamic images of animated logos or photorealistic aliens. CGI images can be produced at NTSC's 525-line 4:3 rectangle or at 2,000 (or more) lines in a 16:9 shape suitable for HDTV or motion pictures.
CGI systems are at the leading edge of digital teleproduction, increasingly offering not only multiple graphics applications but also editing functions for images and sound. The ability of computers to seamlessly process and manipulate digital data representing different kinds of visual and aural content means that the same computer can run multiple software packages, passing moving-image and audio files back and forth among them. The Softimage Digital Studio is a leading-edge example of the increasing trend toward all-in-one systems.
It is generally agreed that the field of CGI began in 1962 with a young graduate student at the Massachusetts Institute of Technology named Ivan Sutherland. His doctoral thesis (titled "Sketchpad: A Man-Machine Graphics Communication System") included a way for people who weren't programmers to "draw" simple shapes on a picture tube connected to a computer. Before Sketchpad, "computer graphics" entailed writing lines of programming code that dictated paper print-outs of crude patterns of X's and O's that might look like something from several feet away.
Sketchpad made history by enabling anyone to use a light pen and a row of buttons to create basic images. According to the late Robert Rivlin's book The Algorithmic Image (Microsoft Press, 1986), "except for the addition of color and a few minor details concerning how the graphics processing is accomplished, the 1963 version of Sketchpad has remained virtually unchanged in 95 percent of the graphics programs available today, including those that run on home computers."
In 1964 Sutherland teamed up with Dr. David Evans at the University of Utah to develop the first academic computer graphics department. CGI advanced through the years as a part of computer science, evolving in the labs of universities, corporations, and governments. A long series of innovations gradually improved the technology, including: the display processor unit, for converting computer commands and relaying them to the picture tube; the frame buffer, a form of digital memory that improved upon that principle; and the storage-refresh raster display, which made computer-graphic screens practical and affordable. These screens use CRTs (cathode-ray tubes) that divide images into picture elements, or "pixels," the basic unit of computer graphics and digital television. Digital random-access memory (RAM) determines the number of pixels on the screen, their location, color, etc. Each pixel is assigned a particular number of memory "bits" in a process known as bit-mapping; the more bits, the greater each pixel's potential range of colors. "Bit depth" refers to the number of these bits: the greater the bit depth, the more color levels possible. In a 24-bit system, the computer provides for eight bits each of red, green, and blue. It is from these basic colors that all other hues are created in CGI. A 30-bit system provides for 10 bits per basic color; a 36-bit system, 12. More color means more shading, detail, and, in the hands of a skilled artist, realism.
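The arithmetic behind bit depth is simple enough to sketch. The short calculation below (a hypothetical illustration, not drawn from any particular system) shows how each added bit per channel multiplies the available colors:

```python
# Illustrative only: how bits per channel translate into color counts.
def colors_for_bit_depth(bits_per_channel):
    """Return (levels per primary color, total RGB combinations)."""
    levels = 2 ** bits_per_channel     # shades of red, green, or blue
    return levels, levels ** 3         # every red x green x blue combination

for bits in (8, 10, 12):               # 24-, 30-, and 36-bit systems
    levels, total = colors_for_bit_depth(bits)
    print(f"{bits} bits/channel: {levels:,} levels, {total:,} colors")
# 8 bits/channel: 256 levels, 16,777,216 colors
# 10 bits/channel: 1,024 levels, 1,073,741,824 colors
# 12 bits/channel: 4,096 levels, 68,719,476,736 colors
```

The jump from 24-bit to 36-bit, in other words, is not a 50 percent improvement but a four-thousand-fold increase in distinguishable hues.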
Hand-in-hand with CGI technology improvements came ever-increasing advances in writing the programming code necessary to make graphic objects and present them with three-dimensional perspective. This long process of discovery went from inventing ways of drawing wire-frame "skeletons" of objects, to developing algorithms for what's known as polygon rendering to give them surfaces, to animating them in ways the human eye finds appealing (see figure 2). As with every field of human endeavor, CGI researchers built upon past accomplishments and knowledge to continually refine the technology in a process ongoing to this very day. Drawing upon mathematics, physics, and other fields that measure and describe the physical world, the science of CGI continually improves the ways that images can be synthesized and achieve the "look and feel" of matter and energy responding to the universe's natural laws.
Alvy Ray Smith, another Xerox PARC researcher, made significant software-design contributions to Richard Shoup's pioneering paint program. Smith went on to work at the New York Institute of Technology and NASA's Jet Propulsion Laboratory, two hotbeds of CGI research in the 1970s. In 1980 Smith and fellow CGI pioneer Ed Catmull were hired by Star Wars producer George Lucas to run what later became Pixar, a company formed to advance the art and science of CGI in motion picture production. While these and many other pioneers worked to improve the ways in which computers could make pictures, they were also finding applications for the use of such pictures, and writing the necessary software to generate them. At the same time, computer technology itself was undergoing a revolution. Processing power continued to rise and costs dropped, an ongoing situation described in 1965 in what's become known as Moore's Law, named after Gordon E. Moore, co-founder and now chairman emeritus of Intel Corp.
Moore noticed that manufacturers had been able to double the number of circuits on a chip every year, causing exponential leaps in power over time. That leap in power meant the cost per circuit was cut in half each time. Experts say that microprocessor power is now quadrupling every three years.
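The compounding effect is easy to illustrate with a back-of-the-envelope calculation (an idealized model with hypothetical numbers; actual chip generations vary):

```python
# Idealized model of Moore's Law: circuit counts double each period.
def circuits_after(years, start=1, doubling_period=1.0):
    """Circuit count after `years`, doubling every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

# Doubling annually for a decade yields a 1,024-fold increase...
print(circuits_after(10))        # 1024.0
# ...while the cost per circuit falls by the same factor.
print(1 / circuits_after(10))    # roughly a thousandth of the original cost
```

Exponential growth of this kind is why yesterday's million-dollar graphics capability keeps reappearing in affordable desktop systems.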
Each new software application, meanwhile, built upon and improved what had gone before. The year 1974 saw the first annual conference of the Association for Computing Machinery's (ACM) Special Interest Group on Computer Graphics (SIGGRAPH), which went on to play an instrumental role in supporting the new science, its conferences providing a valuable information-sharing environment. That is even more true today.
In the past 10 years, the computers necessary to create CGI have evolved from large mainframes to large workstations to custom-built PCs to off-the-shelf Macintoshes and Windows computers optimized for video. The high end of CGI technology is still a place where millions of dollars are easily spent, and although high-end capabilities invariably gravitate down toward affordable systems, the high end is continually developing its own special capabilities. Similar technological forces are at work across the range of CGI systems: ever-faster and more powerful microprocessors, increasing storage-system capacities, and advances in software. CGI systems run the gamut from turnkey systems that package software with a particular brand of computer to shrink-wrapped programs intended for personal computers.
In addition to a computer, monitors, keyboard, and some form of data storage, nearly all CGI systems also employ a mouse, graphics tablet, and pen. The tablets usually work by means of an embedded wire grid, which senses the location of the pen and relays it to the computer; the "drawing" is displayed on a computer monitor. CGI systems typically use some sort of graphical user interface, usually running under the UNIX, Macintosh, or Windows operating systems. Quantel, a leading maker of CGI systems, is an exception, still employing its own proprietary computer and operating system optimized for CGI tasks, and doing so with great success.
CGI software exists today to produce everything from animations that look like classic hand-drawn cartoons to photorealistic images nearly indistinguishable from reality. CGI is frequently combined with live action to create convincing special effects and fantastic eye-catching environments in movies. Major applications of CGI technology include character generation, two-dimensional electronic paint and animation, three-dimensional modeling and animation, and even virtual sets and actors.
Although character generators perform a relatively basic task in the hierarchy of CGI systems, they represent advanced technology essential to news and live television production. Character generators connect a keyboard to a frame buffer, the output of which can be displayed as letters and numbers that can be colored, sized, and keyed over another video source. A process known as antialiasing keeps curved characters from getting a "sawtooth" edge as they cross video scanlines.
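The principle behind antialiasing can be sketched in a few lines. What follows is a simplified supersampling approach, assumed here for illustration; commercial character generators use more refined filtering:

```python
# Simplified supersampling (an assumption for illustration): a pixel's
# opacity is the fraction of sub-samples that fall inside the character's shape.
def pixel_coverage(inside, px, py, n=4):
    """Fraction of an n x n sub-sample grid inside the shape at pixel (px, py)."""
    hits = 0
    for i in range(n):
        for j in range(n):
            x = px + (i + 0.5) / n     # sample at each sub-cell's center
            y = py + (j + 0.5) / n
            if inside(x, y):
                hits += 1
    return hits / (n * n)

# A diagonal edge (y < x). The pixel straddling the edge gets a partial
# value instead of a hard 0 or 1, which is what softens the "sawtooth."
print(pixel_coverage(lambda x, y: y < x, 0, 0))   # 0.375
```

Rendered as a partial blend between character color and background, that fractional value is what makes a curved letterform look smooth across scanlines.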
The most common use of a character generator is to superimpose a name under a talking head or to add a credit roll to the end of a program. Character generator systems can offer limited digital effects capabilities, including the ability to rotate, spin, extrude or otherwise manipulate type. (Like CGI systems, digital video effects equipment employs a digital memory to store video. In this case, however, the video is input from a live or recorded source and then manipulated and output by the DVE's computer according to whatever capabilities the system's operator decides to use-a page turn, a mirror-smash, etc.)
Depending on the brand and model, some character generators also offer limited paint and animation functions (see below), as well as a host of other features that can include multiple fonts, spell checking, storage of multiple pages, "canned" background textures, variable-speed movement of characters, networkability with other video-production devices, and more. Chyron, the name of a leading maker of character generators, is often used (inappropriately) as the generic term for this kind of equipment. Chyron character generators are designed to provide speedy operation and generous storage capacity so text can be created or updated quickly onscreen in news and sports applications, and many "pages" of text can be called up in an instant. Character generation is but one aspect of CGI, the bulk of which is performed not in on-air applications but in post production.
A picture, as the saying often attributed to Confucius goes, is worth a thousand words, which is why paint systems are essential to communicating news. Devices such as Quantel's Paintbox are used to take still images from a type of computer memory known as a video still store and provide a graphics designer with the tools to color, size, add type, and otherwise configure graphics for display behind news readers. Over-the-shoulder graphics are a familiar part of television news, enhancing reports with imagery and often a word or two of text summarizing the story (rape, robbery, or murder). A limited form of animation known as color-cycling can be applied to give the appearance of flickering flames and other repetitive motion.
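Color-cycling is conceptually simple: pixels store palette indices rather than colors, so rotating the palette entries animates the image without redrawing a single pixel. A hypothetical sketch:

```python
# Hypothetical sketch of color-cycling: the image's pixels hold palette
# indices; rotating the palette's entries creates apparent motion.
def cycle_palette(palette, step=1):
    """Rotate the palette by `step` positions."""
    return palette[step:] + palette[:step]

flame = ["dark red", "red", "orange", "yellow"]   # palette for a fire effect
print(cycle_palette(flame))   # ['red', 'orange', 'yellow', 'dark red']
# Cycling once per frame makes the "flames" appear to flicker,
# even though no pixel value is ever recomputed.
```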
Paint systems are typically used for more elaborate tasks in post production than in broadcast news. These tasks can include such processes as rotoscoping (frame-by-frame drawing on top of video or film footage to alter the preexisting image), wire removal, matting, and other techniques. Such work can be performed manually or in automated fashion.
The art of wire removal has changed dramatically with the use of digital technology. Years ago, wires to support or fly actors or props were made as invisible to the camera as possible to help the illusion of flight. Today, wires are colored bright noticeable colors so that the computer artist can easily see them and remove them.
Three-dimensional modeling and animation in video refers to the display on a two-dimensional computer screen of objects that--when rotated or otherwise manipulated--give the appearance of being 3D. Stereo-optical 3D, which often requires viewers to wear special glasses, can be created with 3D CGI (as it can with probably any other kind of image-creation technique), but beyond that the two kinds of "3D" refer to two different types of imaging.
Animators use their keyboard, mouse, and pen-and-tablet interface tools to tell the computer the shape of these mathematical models, some of which may be chosen from a library of shapes, such as those offered by Viewpoint Data Labs. Sometimes an actual 3D object--a model, statue, or even a real person--is used, its topography and dimensions communicated to the computer via a special input device. These can include the Immersion Corp. MicroScribe-3D for small objects or the Cyberware laser-scanning system for large ones such as human beings.
Once all the contours of the object have been defined by whatever form of modeling is used (constructive, procedural and solids modeling are three different approaches), animation of the wire-frame object can begin. Animators typically storyboard the action of the scene they intend to create for effective planning. Motion paths can be determined and "camera" positions (the point of view that viewers of the finished animation will have) and movements specified. "Key frames" are specific, periodic points at which certain actions will occur; the computer can then interpolate and generate the "in-between" frames between key frames.
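The in-betweening step can be illustrated with the simplest possible interpolation (linear, an assumption for clarity; production animation systems use spline curves and adjustable easing):

```python
# Linear in-betweening (a simplified, assumed model): the computer
# fills in the frames between animator-specified key frames.
def inbetween(key_a, key_b, t):
    """Interpolate between two key-frame values for t in [0, 1]."""
    return key_a + (key_b - key_a) * t

# Key frames: an object at x = 0 on frame 0 and x = 100 on frame 30.
# The computer generates every intermediate position automatically.
for frame in (0, 10, 20, 30):
    print(frame, inbetween(0.0, 100.0, frame / 30))
```

The animator specifies only the two key frames; the other 29 positions are calculated, which is precisely the labor savings that made computer animation attractive in the first place.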
When the animation process is complete, the object or objects being animated can be given a surface via rendering, which assigns whatever color, shading and other surface attributes the animator desires (metallic, matte, textured, etc.). "Mapping" refers to the computationally intensive process by which the computer calculates and "draws" in the "skin" or surface of the object(s) in the animation. Again, it's not as if the computer is taking over from the human designer, who has instructed it on what the animation should look like: where the "light" source is coming from in the scene, what the surface should look like, what the motion path should be, the colors, the quality of the "camera's" lens, etc. The computer then relies upon its rendering program to carry out these directions, mathematically calculating and drawing the finished action for each successive frame of the animation. The final animation can be output to videotape, motion picture film, or other storage media, such as a digital data recorder.
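One of the basic per-pixel calculations a rendering program performs is diffuse (Lambertian) shading, in which a surface's brightness falls off with the angle between its orientation and the light source. The sketch below is illustrative only, with assumed vectors and colors; production renderers add specular highlights, textures, shadows and much more:

```python
import math

# Illustrative diffuse (Lambertian) shading; vectors and color are assumptions.
def lambert(normal, light_dir, base_color):
    """Scale a color by the cosine between surface normal and light direction."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    intensity = max(dot, 0.0)          # surfaces facing away receive no light
    return tuple(c * intensity for c in base_color)

# A surface lit head-on shows its full color...
print(lambert((0, 0, 1), (0, 0, 1), (1.0, 0.2, 0.2)))   # (1.0, 0.2, 0.2)
# ...while light arriving at 60 degrees yields roughly half the brightness.
angle = math.radians(60)
print(lambert((0, 0, 1), (0, math.sin(angle), math.cos(angle)), (1.0, 0.2, 0.2)))
```

Repeating a calculation like this for every pixel of every frame is why rendering remains the most computationally expensive stage of the pipeline.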
Compositing and Convergence
One of the most important applications of CGI today is in compositing, where graphic elements can be seamlessly merged with live-action imagery or with other computer-generated footage. Movies featuring photorealistic dinosaurs, ocean liners and monster insects are all examples of high-end compositing work. Images from live-action footage--most often shot on film--are scanned and turned into digital data then imported into a high-end digital re-touch system such as Quantel's Domino or Discreet's Inferno. That film imagery--now existing as zeros and ones--can be manipulated and combined with CGI to complement it in whatever fashion the script calls for. When the sequence is completed to the director's satisfaction, this data is output to a 35mm film recorder for incorporation into the final motion picture negative.
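At the heart of all this layering is a remarkably simple per-channel formula, the "over" operation from standard digital compositing math. The sketch below is minimal and omits premultiplied alpha and the many refinements the high-end systems named above implement:

```python
# The compositing "over" operation, per channel (minimal sketch only;
# premultiplied alpha and other refinements are omitted for clarity).
def over(fg, fg_alpha, bg):
    """Blend a foreground channel value over a background one."""
    return fg * fg_alpha + bg * (1.0 - fg_alpha)

# A 60%-opaque white element layered over a black background:
print(over(1.0, 0.6, 0.0))   # 0.6
# A 50%-opaque black element over a white background:
print(over(0.0, 0.5, 1.0))   # 0.5
```

Applied to every pixel of every layer, this one blend is what lets a scanned film frame and a computer-generated dinosaur occupy the same shot seamlessly.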
Even more common is the use of compositing technology in television commercials, where the end product goes out to D1 or D5 videotape, and the end result may or may not be used to make fantastic images. This CGI may just as often consist of pleasing designs or pictures that would have been too difficult to create using any other means. Examples range from a simple promo showing multiple, moving layers of 2D type choreographed against geometric shapes and film clips to a gasoline commercial showing a filmed image of a car with a superimposed computer-generated 3D "x-ray" of its engine running. Specific effects may be applied via software in the edit room or compositing suite.
The production of a composited scene may employ any number of other CGI functions, some of which are familiar, such as paint, and others that are the result of new, leading-edge software capable of innovative effects. Increasingly, compositing, effects and other CGI functions are being integrated within editing systems. Any graphics manufacturer, for instance, can interface its graphics, paint or animation system with any of the Avid Media Composer nonlinear editing systems. Such systems offer pull-down menus for these functions right inside the edit menu. An editor can access 3D animations while in an edit program or take a clip or series of clips and send it out as a graphics file.
Nonlinear edit systems employ multiple "tracks" of imagery as a means of organizing the sequence of clips, and many such tracks are for computer graphics. Even a simple logo is seldom simple anymore. It's not uncommon to composite many different paint or character-generator segments, with each character existing as its own computer-graphic layer complete with shadows, different light sources and other visual attributes. Designers can, for instance, "fly in" a word, object or other element so that it has multiple shadow layers. Avid Technology's Media Composers can also import garbage mattes, a graphics function; the idea is to make edit systems more efficient, so that if an editor imports a paint-system graphic and the key isn't perfect, the editor can touch it up. Discreet's Fire system, for example, includes a paint package called Retouch.
Conversely, more and more CGI systems are offering editing functions. In many instances the computer on which the editing and CGI functions are performed is the same, and clips can be easily moved back and forth among applications. This convergence is ongoing and is tending to blur the distinction between CGI and editing, instead combining the two as co-equal functions of post production. Although very specific skills are necessary to cut moving images so as to maintain pace and rhythm, editors today are also able--if they choose--to perform paint and other CGI tasks as part of their work. Likewise, graphic design is its own discipline, requiring special talents and experience. There are, however, instances where graphics designers need access to editing tools to make them more efficient. High-end graphics systems makers such as Quantel and Discreet have introduced CGI systems with editing capabilities to address these creatives.
Much of today's CGI work is done on personal computers and small workstations that outperform the largest mainframes of a decade ago. As mentioned earlier, the three principal types of computers--or "platforms"--used are the Apple Macintosh, IBM-compatible PCs and the line of workstations made by SGI. Key to these platforms are their respective operating systems, specialized software that manages the computer's hardware components and the onscreen commands (the "interface") by which the computer is used. Operating systems provide and manage access to different kinds of software "applications" (editing, paint, character generation and compositing) and to the files created within them. Most importantly, operating systems provide a uniform environment that programmers can write to, and thus create different kinds of software applications that can all be used by the same computer. Leading operating systems include Apple's Mac OS on the Macintosh; IBM-compatible PCs used for CGI usually employ either Microsoft's Windows 95 or Windows NT operating systems; SGI computers use a form of the UNIX operating system known as IRIX.
Although opinions vary, the Macintosh is regarded by many as the leading image-processing platform and the one whose operating system's foundation code has been best exploited by software developers. Many interface commands are the same from program to program, and a Macintosh user can have multiple programs open simultaneously. Users can pass images or "film strips" from program to program. For instance, someone using the Media 100 nonlinear video editing application can simultaneously open Adobe Photoshop for paint, Adobe After Effects for effects, Electric Image for 3D, and Puffin Designs' Commotion for rotoscoping, and move files seamlessly among these applications. Whether or not the Macintosh will continue to play a leading role in CGI and other video functions will depend on how successfully Apple Computer and its technology partners fend off competition from Windows NT computers, which are less expensive and increasingly able to do anything a Macintosh can.
SGI workstations, meanwhile, tend to lack homogeneous interfaces and the ability to move files across applications as effectively as on the Macintosh. On the other hand, software for SGI computers typically offers greater bit-depth (up to 12 bits per color) and computational power. This increased bit depth provides a greater color spectrum than the eight bits per color afforded by Macintosh software (although the computer itself can handle much more bit-depth); greater bit-depth enhances realism when generating the kind of high-resolution CGI necessary for motion picture production. The computational and data-storage capacities of SGI's Octane and Origin computer/server combinations and the power of the big Onyx2 workstations make these systems favored tools for high-resolution motion picture CGI and effects work.
Although the barriers between editing and CGI are becoming increasingly blurry, having one person do all the work on one system is not necessarily desirable for all video facilities. A small "project studio" business may find it expedient to have one or two people performing all tasks on a single computer, but high-end studios typically use the kinds of talent who prefer to specialize in their own particular area. And on top of that there's no system today that does everything really well. Why have an editor do graphics when you can have a graphics designer doing it faster and better on a less-expensive system? In such environments, workgroups speed efficiency.
Workgroups use Ethernet or other high-speed computer networking systems to link editing, graphics and other functions in such a way that different people can work on the same body of material simultaneously. In a workgroup, editing, graphics compositing, 3D animation, paint and audio talent can all access the same footage and work in a timely manner. Workgroup editing enables users to expedite the adjustments between, say, a paint workstation, an animation workstation, and an edit system.
Virtual Sets and Actors
Everyone is familiar with the use of chromakeys in television: weathermen superimposed over maps, local used-car dealers keyed over a comical background. A new technology known as the virtual set, however, expands on chromakeying by adding the dimension of coordinated foreground/background movement. Virtual sets place actors in some sort of environment and allow them to walk around that environment in realistic fashion; in fact, the entire set is a photorealistic 3D CGI model. This very high-end technology is being promoted as a way to turn any blue- (or green-) screen stage into any kind of environment you can create with CGI, from the Old West to outer space. In any case, a virtual set is--ideally--cheaper than building a real one.
Virtual set installations start with a CGI model of the set created using such 3D software as Alias|Wavefront or Softimage. That model is then stored in a powerful computer (such as an SGI Onyx) capable of rendering it in realtime. A moveable video camera (or cameras) then captures images of the actors' performance on the blue-screen set. Using either motion sensors, a motion-control head, or pattern-recognition technology, the camera's position is fed into the computer, which then "draws" (renders) the background as it should be seen from that particular point of view. The actor(s) and the CGI are then combined, and a virtual environment is created--hopefully one that convinces viewers that there's no trick involved.
CGI objects can also be introduced into the scene; with proper rehearsal and blocking it can be made to look as if the actor is interacting with these objects (a computer-generated bird, for instance, flying through the scene). Camera movement is what makes a virtual set seem real; the computer generates a background image that corresponds to what the camera's (audience's) point of view should look like from any given position. And the computer is powerful enough to re-draw the scene in realtime as the actor(s) move within the "set."
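The geometry involved can be sketched with a simple pinhole-projection model (an illustration under assumed numbers only; commercial virtual set systems use proprietary calibration and rendering):

```python
# Simplified pinhole-camera projection (illustrative assumptions only):
# the renderer must recompute where set features land on screen
# whenever the real camera moves.
def project(point, camera, focal=1.0):
    """Project a 3D point onto a 2D image plane relative to the camera."""
    x, y, z = (p - c for p, c in zip(point, camera))
    return (focal * x / z, focal * y / z)   # perspective divide

corner = (2.0, 1.0, 10.0)                   # a corner of the virtual set
print(project(corner, (0.0, 0.0, 0.0)))     # (0.2, 0.1)
# Dolly the camera one unit to the right: the background must shift left.
print(project(corner, (1.0, 0.0, 0.0)))     # (0.1, 0.1)
```

Performing this recalculation for an entire photorealistic set, 30 times a second, is what demands realtime rendering hardware of the Onyx class.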
Although several companies sell virtual set systems, this is still a technology in development and virtual sets tend to be extremely expensive and balky to operate. But as the technology matures it's entirely possible that Hollywood carpenters may have something to fear from CGI artists.
Similar technology is used to electronically replace advertisements on stadium billboards in televised sporting events and to place sponsor logos on the playing field but not over the players.
Virtual actors may scare Hollywood actors even more. As each new CGI software revision further advances mankind's ability to synthesize images, we come one step closer to being able to create photorealistic images that will be indistinguishable from real human beings. Each year, the SIGGRAPH film and video show spotlights the best CGI clips, and synthetic humans are inevitably among the featured attractions; each year these images are just a tad more convincing.
A decade ago critics contended that CGI actors could never display sufficiently convincing facial expressions or physical movement to pass for real or even earn an audience's empathy. Total realism is still to be achieved, but it comes closer every year (as we've seen recently in Toy Story 2 and Stuart Little). As it does, software engineers learn more and more about the complexity of human movement and facial expression. Humans are conditioned from birth to "read" faces; the better computer-generated faces get, the more bizarre they seem to us. Unless and until total realism is achieved, the brain will know that something is amiss, so central to our psyche is the perception and interpretation of facial expressions. CGI seems, however, to be gradually closing in on the goal of total realism. This could be bad news for movie and TV actors but good news for producers.
Assuming totally convincing virtual actors (also called "synthespians," for "synthetic thespians") can be created, their advantages will be many: They don't age, don't get sick or need vacations, aren't potential subjects of personal scandal, and they don't demand raises. A synthespian can be crafted to please a selected demographic profile and combine all the best traits of the most beloved movie actors. Digital compositing technology's cut-and-paste capabilities have already brought forth synthespians from the past--sort of. Deceased stars such as John Wayne, Marilyn Monroe and Fred Astaire have been digitally lifted from various film performances and composited into new live-action commercials hawking everything from vacuum cleaners to beer. Perhaps it's just a matter of time before Moore's Law brings forth computers so advanced that Hollywood sound stages will be sold off for valuable real estate and major studios will operate in small rooms in whatever locations creative CGI talent wishes to reside.
Buying a CGI System
As with every other area of digital teleproduction, consideration of which CGI system is most appropriate for a given facility's needs depends upon the tasks that will be performed on it. The word system is a bit of a misnomer; software and hardware typically come from separate vendors, although some software companies also sell off-the-shelf computers they've optimized for the task. In an age when processing power, software sophistication and operating systems are continually being revised, it's a given that CGI systems need updating on a regular basis. Good arguments exist for both the so-called "open-platform" systems (those based on personal computers or workstations) and for the "dedicated" systems that use a custom-configured computer (these "open" versus "closed" debates appear frequently in television and video trade magazines such as Television Broadcast and Videography). Ultimately, the best course of action in choosing a system is to clearly understand the tasks you wish to perform, obtain as much information as possible (talking to other facilities to learn of their experience is a good place to start), and then make sure you investigate all the products on the market. In a technology arena as dynamic as this one, it's not impossible that a lower-priced system will outperform a more expensive one. Also, compromises in speed can yield savings; a less expensive system may offer all that its more costly counterpart provides--if you're willing to tolerate getting your work done more slowly, typically through increased rendering time. Whatever you choose, rest assured that CGI technology continues to offer more functionality at ever-improving price-performance ratios. Moore's Law is on your side.
The Future Of CGI
As mentioned earlier, CGI technology continues to evolve at a rapid pace and get cheaper as well. In addition to recent consolidations among software and hardware companies is the trend toward very small groups of developers creating advanced--yet inexpensively marketed--software to generate ever-more complex simulations of such natural forms as smoke, hair and liquids. An increasing quantity of this code is sold as "plug-ins," which add to the capabilities of existing graphics software. Adobe's popular After Effects software package now has hundreds of sophisticated and inexpensive third-party plug-ins available to increase its digital effects-generating capabilities.
On the hardware side, Apple's Macintosh operating system has enjoyed a resurgence of developer interest now that the company's original co-founder Steve Jobs has returned and reinvigorated the company with innovative and powerful products. These include the G4 workstation, so powerful the U.S. government originally classified this IBM/Motorola-processor powered machine as a supercomputer too sophisticated for export. Intel-based PCs running the Microsoft Windows operating system also continue to improve in computational horsepower. Sun Microsystems' UNIX and Java-based technologies also add power to CGI for applications as diverse as engineering visualization and Web design.
The operating system wild card is Linux. This open-source operating system is free of charge, has attracted tens of millions of users, and runs efficiently on a wide variety of machines, including both PCs and Macs. Many consider it a serious contender to displace the aging and expensive operating systems that now dominate the market.
The Internet's booming usage continues to spur demand for all kinds of CGI for business and enterprise. DVD, on the verge of becoming the next major home entertainment format, is a major vehicle for CGI. Virtually all DVDs require some original graphic work, and the trend will increase as the format becomes more pervasive. DTV and HDTV will similarly create many new channels for all kinds of animated CGI content.
Besides the enhanced distribution of the Internet, advanced telecommunications are also making new production paradigms possible. High-bandwidth networking and connectivity are making virtual production "studios" more and more practical, allowing artists and producers to work remotely. This will enable new business and creative models to evolve in the industry. Such networking enables video and audio editors, CGI designers and animators to locate their own boutique production studios wherever they choose. The ability for such professionals to log into large commercial render servers for compute-intensive operations will become increasingly feasible, both technically and commercially.
This kind of industry expansion requires management, both in terms of the networks themselves and the CGI, video and other data they carry. The stakes are high and there are a host of relatively new companies vying to define a new industry segment known as "media asset management" (see Appendix B: Storage & Archiving/Asset Management). Such systems, as the name implies, are used to catalog, search, retrieve and distribute large databases of all kinds of media. Without question, the eventual impact of this technology will fundamentally affect the creative uses of CGI.
Despite the rapid evolution, advancing capabilities and increasing affordability of CGI technology, however, certain factors don't change. Chief among these is the value of a good story. CGI in entertainment, in the final analysis, is simply a new means of making moving images. And, as the success and failure of various recent Hollywood movies have shown, if you don't have a compelling story to tell, using the latest picture-making technology won't make any difference. Of course, that might change if computers get really good at writing scripts as well.