
Delivery & Duplication

It's done. Now you have to deliver the program--either to the individual client or to tens, hundreds, thousands or even millions of people. And although your master might well be a masterpiece, the delivery and duplication process can play havoc with your work.

Analog comes back into the process with VHS as the 800-pound gorilla and BetaSP as the number-one professional format (with one-inch and U-matic still showing up here and there). Digital is also here, but that can mean QuickTime Movies, streaming Web video, CD-ROM and the newest entry vying for the hearts and wallets of consumers--DVD.

What you want for a master is a tape, disk or file of the highest quality, which might be a different format than the submaster used for duplication. Talking to your client and communicating with your duplicating facility will ensure that everyone gets exactly what they need for today and in the future.

Mass Duplication
Digital Be Damned: Consumers Still Love VHS

By Mark J. Pescatore

We can produce programming with the latest digital technologies, but for now, our final product for most consumers remains analog.

Plenty have tried, but no one seems able to knock off the king of the consumer format mountain, VHS. Betamax was better (the engineers said so), but the home audience would not listen. Then came clearly superior recording formats, more compact formats, prerecorded laser-accessed formats. And the general public responded with a collective shrug. Sure, some of us have 8mm or even Hi8 camcorders at home, and more than a few have recently invested in DVD players. But generally speaking, when we're watching at home, we're watching VHS.

No matter what new format attempts to win the hearts of consumers, they will find a formidable foe in VHS, which has been the format of choice since its inception in the late 1970s. According to the Consumer Electronics Association (CEA), VCRs are in an estimated 91 percent of U.S. households and VCR sales continue to increase. The CEA reported record VCR sales of 18.1 million in 1998, more than an eight percent increase over 1997 sales. Lower unit prices and improved features (including improved resolution at extended play speeds) will no doubt help VHS continue its dominance of the home video market.

Although unit sales and market penetration pale in comparison to VCRs, DVD players are enjoying outstanding sales growth. More than one million DVD units were sold in 1998, up more than 200 percent over 1997 sales, and the CEA expects sales of 6.5 million units in 2000. Meanwhile, laserdisc (LD) continues to falter in the wake of DVD. Sales dropped from 49,000 units in 1997 to 20,000 in 1998--and most of those sales were DVD/LD combination players. Through the third quarter of 1999, VHS tapes also continued to outsell all compact tape formats (including VHS-C, DV and 8mm) by a margin of more than 5:1.

Digital VHS (D-VHS) from Hitachi, JVC and Panasonic is the latest attempt to convince consumers to change formats. In this process, the VCR records the MPEG-2 bit stream directly to the D-VHS videotape from a digital satellite system (DSS) receiver and plays it back through an integrated receiver decoder (IRD). The IRD decodes the MPEG-2 signal and provides playback with high-quality audio and video (more than 500 lines of resolution). D-VHS is able to record and play back high-definition programs and even multiple program streams by recording the ATSC MPEG-2 digital television bit stream. And for today's consumer, the system can play and record standard VHS, so current tape libraries are not rendered obsolete.

Selectable data rates allow consumers to record up to 49 hours on a single DF-420 cassette (at the lowest record quality). Standard mode allows for seven hours of record time, while high speed (HS) mode, at 28.2 Mbps, will record up to 3.5 hours of high definition digital broadcasts on one cassette.
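
These record times are simple arithmetic on the published figures. As a back-of-the-envelope check, here is a Python sketch; the tape capacity is derived from the HS-mode numbers above, not from a published specification:

    # Derive the DF-420 capacity from the HS-mode figures quoted above:
    # 28.2 Mbps sustained for 3.5 hours of high definition recording.
    hs_rate_mbps = 28.2
    hs_hours = 3.5
    capacity_mbits = hs_rate_mbps * hs_hours * 3600   # ~355,000 megabits

    # Record time at any other selectable data rate is capacity / rate.
    for rate_mbps in (28.2, 14.1, 2.0):
        hours = capacity_mbits / (rate_mbps * 3600)
        print(f"{rate_mbps:5.1f} Mbps -> {hours:4.1f} hours")
    # 28.2 -> 3.5 hours (HS), 14.1 -> 7.0 (standard), 2.0 -> ~49 hours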

Sounds promising, even more promising than Super VHS (S-VHS) did when JVC introduced it in 1987 (though admittedly, S-VHS has carved itself a niche in some small market ENG and educational applications). However, there are some problems. First, the consumer has to purchase both a D-VHS VCR and the compatible DSS receiver to complete the system, which alienates cable customers. The second issue is price, as the JVC HM-DSR100DU D-VHS VCR has a suggested retail price of $999.95. (Panasonic's PV-HD1000 digital VCR designed for HDTV has a similar price tag.)

How likely is the average consumer to change tape formats? History tells us that home viewers are quite content with their VHS, thank you very much. And now, with many VCRs priced at less than $100 and many prerecorded VHS titles available for $10 or less, not to mention an extensive library of titles available for rental, it seems unlikely that consumers will abandon VHS anytime soon. If anything, consumers might supplement their VHS decks with DVD players, as prices continue to fall and more titles are released on that format. But the big selling point today is that consumers and professionals can easily record VHS at a very low cost, while affordable, recordable DVD is years away.

From Camera To Desktop: The Distribution And Handling Of Video As Files

By Robert R. Gerhart

Show me...

In this digital age, audio and video have replaced static forms of communication, and the computer is no exception to this rule. Lifeless text has given way to sound and motion. The problem has come in finding ways to translate information from traditional analog platforms into digital media that can be readily distributed in formats serviceable and sophisticated enough for the professional data handler, yet understandable enough to be usable by the average consumer.

Digital Versus Digitized Video

Today's studios use a mixture of digital, digitized and analog media for content creation. Most people in the video field, however, commonly refer to just about any computer-based moving picture as "digital video," regardless of its true origin and nature. While this definition is not entirely wrong, it is certainly less than accurate. To better understand the handling of video as files, a common frame of reference needs to be established.

True digital video is, quite literally, a digital version of analog video. It is defined by its signal characteristics under the guidelines set by SMPTE, which dictates the legal values, gamma, colorimetry and amplitude of the digital signal. These guidelines, like those of traditional analog video, provide a standard for the handling of digital media across a variety of equipment types and from different manufacturers. In contrast, digitized video is a product of the computer age. It can exist in a number of different formats, many of which are not necessarily compatible, and on a variety of computer platforms, most of which are also incompatible. And, unlike true digital video which is defined by its signal characteristics, digitized video is classified by the method used to store the media in the computer.

Figure 1. Digital media versus digitized media.

Understanding Video and the Computer

Digitized video (or media, as it often includes audio and other information) is probably the most common format in use today and, as such, is the focus of this article. It is the backbone of every nonlinear editing system, makes media transfer over the Internet possible and functions as the core technology behind every major communication medium in use. In order for computers to work with digitized media, however, the programs in question must be able to handle a variety of contingencies inherent in this technology. At the most basic level, they must first be capable of reading and, in some cases, writing the particular format or formats used for media storage. These reading and writing mechanisms, known as codecs, are usually chosen by a vendor with a specific purpose in mind, such as the handling of video, audio, animation, MIDI or some other dynamic media type or combination.

For most broadcasters' and videographers' purposes in handling digitized media, specialized codecs have been written that give a system the ability to reach beyond the normal computer-based parameters of file size, data rates, window size and compression scheme to manipulate the variables internal to a media file's content. These can include video parameters such as line structures, aspect ratios, colorimetry and gamma, as well as audio parameters like sample rate, bit depth, stereo pairing and volume controls. With so many media types available, careful selection of the codecs supported by a program reduces the probability of error or confusion by the user and helps to produce a higher quality end product, with lower system operating overhead and a more affordable price. Taking this idea a step further, many manufacturers have either modified established codecs or created their own proprietary versions, ensuring maximum compatibility with their programs at the expense of inter-program or cross-platform file sharing.

Note also that digitized media seldom remains in one place for very long. A program designed to handle this media must be able to work across the different platforms used to capture, store, manipulate, transfer and view it. The program must be able to recognize both the media type and content, as well as the file type the media is stored in and its platform of origin. Though some would argue that this is more an aspect of computer networking than of digitized video, it is still important to the overall procedure of handling video as files and therefore necessitates consideration.

Codecs and Compression Theory

Codecs are the functional parts of a program or operating system that contain both compression and decompression algorithms. They are integral to the creation of both digital and digitized video, as well as a variety of other media. These algorithms determine which formats of digitized media a program can work with. Fast-compressing codecs increase the efficiency with which digitized media can be created, while fast-decompressing codecs increase the speed at which the user can open and manipulate the finished file. Obviously, the faster that both of these processes can be performed, the better. Decompression, however, is usually most important--especially for CD-ROM, network and Internet-based delivery applications.

Compression and decompression times are often not equal given the same data stream; codecs that provide higher compression ratios tend to require substantially longer times to compress than decompress--these are known as asymmetric codecs. Two main factors will determine which codec is best suited for a particular type of media: the speed of the codec's compression algorithm and the reproduction quality of the images being compressed. Each codec takes advantage of different properties of a file to achieve compression, so the type of material being compressed and how it was produced significantly affects both how much compression can be applied and how well the codec can reproduce the compressed material. Compression quality can be adjusted at two levels during the creation of a digital or digitized video file, spatially or temporally, depending on the codec involved.

Spatial Compression is the process of applying compression to individual frames. It eliminates the redundant information within each frame but does not affect the frame relative to the image sequence of which it is part. The best example of this type of compression is found in JPEG images, where the tradeoff between image quality and file size is easy to see. In most codecs, keyframes are usually only spatially compressed.
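
The quality-versus-size tradeoff of spatial compression is easy to demonstrate with JPEG itself. A minimal sketch using the Pillow imaging library in Python (the input file frame.png is a placeholder for any captured still frame):

    from PIL import Image  # Pillow imaging library
    import os

    frame = Image.open("frame.png").convert("RGB")  # any source frame

    # Spatially compress the same frame at several JPEG quality settings
    # and compare the resulting file sizes.
    for quality in (90, 50, 10):
        name = f"frame_q{quality}.jpg"
        frame.save(name, "JPEG", quality=quality)
        print(f"quality {quality:2d}: {os.path.getsize(name):,} bytes")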

Temporal Compression is the process of applying compression to a sequence of images such as video and multimedia. The individual images themselves can also have spatial compression applied to them, either before or during the temporal compression process, but this is not always the case. Temporal compression is applied to the difference frames of an image sequence--those frames that are not designated as keyframes. This technique takes advantage of the fact that, in general, a given frame has much in common with the frame that preceded it. The codec therefore only needs to recognize the differences between the frames and store the changes that have occurred from one frame to the next. Keyframes function as the reference frame for the image sequence that follows, and therefore need to be complete images. It is for this reason that they cannot be temporally compressed.
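
Conceptually, a temporal compressor keeps a keyframe whole and then records only what changed. A simplified NumPy sketch of the idea (real codecs add motion estimation and entropy coding; this is illustration only):

    import numpy as np

    def temporal_compress(frames, threshold=8):
        """Store the first frame whole, then only the changed pixels."""
        keyframe = frames[0]
        deltas = []
        prev = keyframe.astype(np.int16)
        for frame in frames[1:]:
            cur = frame.astype(np.int16)
            changed = np.abs(cur - prev) > threshold  # which pixels moved?
            # Record coordinates and new values of changed pixels only.
            deltas.append((np.argwhere(changed), cur[changed]))
            prev = cur
        return keyframe, deltas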

It's A Codec Moment: The Working Codec

There are many different codecs available to the computer user today. More than a few have been created or modified to work with specific application programs or hardware systems. Designed to compress a wide variety of media including video, animation, audio, MIDI, time code and more, most offer more than just spatial and temporal controls. The setting of these controls will play an integral part in determining the nature of the media file being created and can influence both reproduction quality and file size dramatically. The following list will examine a number of these parameters and how they affect the media file.

Keyframes are the designated frames in an image sequence that provide a point from which a temporally compressed sequence may be decompressed. The use of more keyframes in a sequence will increase the overall file size of the finished media file, but will benefit the end user by permitting more effective random access to any part of the sequence and improving a sequence's reverse playability. If these features are not deemed important, the number of keyframes can be reduced accordingly. However, sufficient keyframes must be present to allow the media player to keep video, audio or other data synchronized during playback. The creation of keyframes can be either natural or forced. A natural keyframe is the first frame of a cut in the image sequence, while a forced keyframe is one that is created arbitrarily by the application program itself, or the codec being utilized by the program for compression. Many applications and codecs will automatically create a forced keyframe when they detect a certain percentage or greater difference between the current frame and the previous frame in an image sequence.
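
The forced-keyframe heuristic described above reduces to a per-frame threshold test. A hedged sketch (the 25 percent figure is an assumption for illustration; each codec chooses its own heuristics):

    import numpy as np

    def needs_forced_keyframe(prev, cur, threshold=0.25):
        """Force a keyframe when too large a fraction of pixels changed."""
        changed = np.mean(prev != cur)      # fraction of differing pixels
        return changed > threshold

    def place_keyframes(frames):
        keyframes = [0]                     # first frame is always a keyframe
        for i in range(1, len(frames)):
            if needs_forced_keyframe(frames[i - 1], frames[i]):
                keyframes.append(i)
        return keyframes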

Frame Rate--Most applications allow you to select the desired frame rate for playback. This number can be set to any value; however, for smoothness of playback, it is strongly recommended that a frame rate which is a sub-multiple of the source rate be utilized (from a digitized NTSC source of 30 fps, you should use 30, 15 or 10 fps; from a digitized PAL source of 25 fps, use 25, 12.5 or 6.25 fps). Reducing the frame rate of the media file will obviously reduce the file's size and increase playback stability accordingly, but an excessively low frame rate will create jerky movement in the action and erratic or unfinished-looking transitions.
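
Dropping to a sub-multiple rate amounts to keeping every Nth frame, which is why other target rates play back unevenly. A small sketch:

    def reduce_frame_rate(frames, source_fps, target_fps):
        """Keep every Nth frame; the target must be a sub-multiple."""
        step = source_fps / target_fps
        if step != int(step):
            raise ValueError("target rate must be a sub-multiple of source")
        return frames[::int(step)]  # e.g. 30 -> 15 fps keeps every 2nd frame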

Frame Size--Most traditional analog broadcast professionals recognize television frame size as being either 640x480 pixels for NTSC or 720x576 pixels for PAL. Many media-handling programs in use today come with a variety of commonly used frame sizes preset, including these, and often allow for the manual creation of custom aspect ratios for alternate applications. Programs that support the use of custom frame sizes can often be set by the user to either crop or distort an original image to fit different frame dimensions. For draft purposes or non-broadcast applications, some programs offer the option of outputting reduced frame sizes without changing the working resolution, usually featuring settings from full to half, third or quarter of the original size, depending on the software being used. Smaller frame sizes not only create smaller file sizes for the finished media file, but provide easier playback on older or less-sophisticated systems. When full-size resolution is not critical to the final product, reducing the frame size is an easy way to create significant increases in media playback performance.

Pixel Shape--Once upon a time, the only answer was "square." With the advent of digital video and HDTV, that is no longer the case. Modern media handling, especially for broadcast applications, requires software to have the ability to either set or select a pixel shape compatible with the aspect ratio of the media's final output. This range often spans from traditional square computer pixels to the non-square shapes necessitated by the numerous aspect ratios and line configurations of digital video formats and HDTV standards, with many programs offering user-selectable or custom settings.

Color Depth--Many programs and codecs also allow you to select the number of possible colors, or color depth, of the media file being created. Though most computers today are capable of delivering a color palette into the millions of colors, only a small portion of this is ever needed or utilized. As human vision distinguishes only a small fraction of that possible spectrum, it makes sense to eliminate those colors that fall outside our range of perception. Limiting a media file to the lowest possible number of colors can make for substantial reductions in file size, as well as increase its playback stability on older or less sophisticated systems. Eliminating color altogether, of course, takes this efficiency a step further, though this is usually not a feasible option. Care needs to be exercised when reducing color depth in addition to applying video compression. Too much compression used in conjunction with an overly limited color palette can cause posterization, solarization and generally poor image quality, especially when applied to an intricate or dithered source.
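
Reducing color depth is typically a one-call operation. A minimal Pillow sketch (again assuming a local frame.png; quantizing far below 256 colors is where posterization starts to show):

    from PIL import Image

    frame = Image.open("frame.png").convert("RGB")

    # Reduce from millions of colors to an adaptive 256-color palette...
    small_palette = frame.quantize(colors=256)
    # ...or further, to 16 colors, where posterization becomes visible.
    tiny_palette = frame.quantize(colors=16)

    small_palette.save("frame_256.png")
    tiny_palette.save("frame_16.png")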

Data Rate--This setting designates the amount of data provided or required at a specific moment in time to play a particular media file. Control of this number (measured in Kbps) is an important factor, not only in regulating the final file size of the finished file, but in determining the minimum system configuration required to decompress and play the media without problems or interruptions. Higher data rate settings will create a media file with better overall quality, but too high a setting can cause problems with playback on less powerful machines or those with slower peripherals (when playing back through a network, over the Internet or from a CD-ROM). Lower data rate settings will reduce overall file size and increase playability on older machines, but will make the media look unclear and "primitive," especially on more technologically advanced systems. It should be noted that in codecs where you can set a limit to a file's playback data rate (for example, Cinepak), the spatial and temporal quality settings are adjusted dynamically during compression, so the specified data rate is not exceeded as the finished media is decompressed.
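
File size, duration and data rate are locked together: size equals rate multiplied by time. A quick sanity check in Python (the rates here are illustrative, not recommendations):

    def file_size_mb(data_rate_kbps, seconds):
        """Size in megabytes of a clip at a given data rate and duration."""
        return data_rate_kbps * seconds / 8 / 1000  # kilobits -> megabytes

    # A 60-second clip at three target data rates:
    for rate in (300, 1200, 5000):          # Kbps
        print(f"{rate:5d} Kbps for 60 s -> {file_size_mb(rate, 60):5.1f} MB")
    #   300 Kbps ->  2.3 MB (CD-ROM/Internet friendly)
    #  1200 Kbps ->  9.0 MB
    #  5000 Kbps -> 37.5 MB (needs fast peripherals)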

Hardware- and Software-Based Codecs

Codecs, for all their numbers and diversity, essentially come in two varieties: hardware-based and software-based. Usually tied in with specific program packages or operating systems, they often constitute a primary part of the host system's functionality.

Hardware-Based Codecs, as the name implies, require a hardware component, usually in the form of a peripheral board or sub-processor module, to function at their optimum efficiency. Though most hardware-based codecs can operate without this component, their efficiency is usually greatly reduced and some, designed to run within specific applications, are unable to function at all. The benefit of working with hardware-based codecs is recognized in their superior performance. The hardware element allows the codec to handle markedly higher data rates without dropping frames or losing synchronization with related media, thus enabling the user to work in greater resolution, with larger aspect ratios and at higher frame rates than software-based solutions.

This performance is not without its drawbacks, however. Hardware-based codecs are considerably more expensive than their counterparts, largely due to the cost of the hardware component. Their usefulness in some situations is also limited because of their connection to the hardware--media files created with these kinds of codecs can usually be viewed only on computers containing similar hardware. Also, the flexibility and features of the codec are often dictated by this symbiosis--if it isn't supported on the hardware, the codec is probably incapable of doing it, now or in future revisions, without modifying or replacing the hardware element. Finally, updates, if available, are usually hardware- or firmware-based, and may necessitate a trip to an authorized service center to complete.

Software-Based Codecs, on the other hand, rely on the computer's own CPU for their computational power and, therefore, can be installed on any system with sufficient processing ability. Available in greater numbers and for more diversified applications than their hardware-based counterparts, software-based codecs are considerably less expensive and usually offer more available features, sacrificing top-end performance for enhanced functionality. Their effectiveness is dictated solely by the host system's processing capability. Many systems that rely on hardware-based codecs for their main functionality will often incorporate software-based codecs for inter-program or cross-platform media exchanges.

Getting There is Half the Fun

The world is analog and, as we've seen, computers (and therefore the future of media content creation and distribution) are not. At some point, content will have to be converted to a digital format of some nature so that it can be edited, manipulated, integrated and distributed. That process can take place at the time the media is acquired or at some point in the future, but it will happen. Codecs will define the media's parameters as it is captured and stored, but it is the hardware and related software that will dictate when and how the media will enter the digital realm.

To get the most out of your media, "digital in" is obviously the best place to start. This means getting the A/D conversion process as close to your source as possible. This will ensure the clearest possible recording while, at the same time, reduce the chance for degradation to a minimum. Starting with a clean source will make any future digital manipulation easier and faster, as errors and artifacts require more processing time and storage space to address. Computers, as a rule, do not know the difference between an error, artifact or element, and will treat them all with equal enthusiasm, expending valuable processing resources in doing so. Failing "digital in," the next best alternative is to digitize from the cleanest possible source material using the highest quality equipment, cabling and connectors available. A wide variety of possible alternatives is presented at this stage, ranging from simple pieces of software that utilize a computer's stock input ports and system CPU, to elaborate and exotic nonlinear editing workstations with specialized hardware and custom software and codecs. Obviously, at this level, some sort of determination needs to be made based on the media, its origin, its intended use and its final destination as to what sort of equipment will be employed to digitize, store and otherwise manipulate it.

Once digitized, source media often changes form several times before reaching the output and distribution stages. Editing and other content manipulation usually takes place at the highest possible quality level, during which time the primary media is often combined with additional material such as audio, data tracks or other information. The resulting program is then output as a self-contained file. It is usually at this stage where preparation for distribution takes place. This can be done either by the software that created the file, or by another, more specialized media manipulation program. Preparations can include repackaging the media with a new codec, or reducing the data rate, color palette, frame size or frame rate. Equal attention must also be given to available audio tracks, which can be eliminated or combined, have their sample rate or bit depth reduced or any one of a number of codecs applied. All of these factors and more are reliant upon the media's final destination and playback methodology.

Gently Down the Stream

With the number of personal computers on the rise, and the future of broadcasting going digital, it is not hard to envision the eventual nirvana that will be streaming content over the Internet. Though increasingly common, streaming video is still in its infancy; it has a long way to go before even beginning to approach broadcast quality as we know it today. Several companies and numerous technologies continue to make headway in this area, but the core process remains the same--content must be digitized, prepared and then distributed. The venue is the Internet and the limitations are numerous and ever-changing. Bandwidth constraints and modem limitations make realtime streaming problematic at best. To overcome this, a technique called Progressive Downloading is most frequently used, whereby media is downloaded onto a computer's hard drive and played back from there, instead of being processed in realtime directly from the data stream. Transferring and viewing media in this way has numerous benefits, including reduced bandwidth demands and improved media quality and playback reliability. Files can be created with the more space-efficient, if time-consuming, codecs instead of those designed to move quickly at the expense of quality.

The popularity of streaming media will continue to increase in the coming years and, with it, so will the technology to make better files faster and move them more efficiently. Streaming media is no longer reserved almost solely for professional Web masters or large corporations, but is now available to virtually anyone with a good personal computer and a reliable Internet connection.

Playing With Your Media

Players are a computer's software architecture for allowing users to access video, audio, animation, text and other dynamic information. Available in a variety of formats, from stand-alone programs to system software components and plug-in drivers, a multitude of players are available for each of the different computer architectures commonly in use today--Macintosh, Windows and SGI. Some players, such as QuickTime, have versions for more than a single operating system (including Java), providing comparable features as well as a standard of compatibility for exchanging files between programs and computers. Players are usually self-contained packages which work in a stand-alone capacity to view, and sometimes manipulate, media files. They can be as simple as a small shareware program that will allow you to play files downloaded from the Internet, or as complicated as a high-end nonlinear editing or compositing workstation costing tens or hundreds of thousands of dollars. More sophisticated players can often access any additional codecs available on the host system and incorporate that functionality into their own. In some cases, especially when working with hardware-based codecs, special players are provided by the vendor which are designed to interface with the non-system hardware for enhanced performance.

Like codecs, players vary in form and function. Even the most basic may look simple from the user's perspective, but underneath they often have to handle a variety of rather complicated functions. QuickTime, for instance, probably the most commonly installed player available, is composed of over 200 separate software components divided into more than 20 different categories. This component architecture not only includes system software, but also compression facilities, human interface standards and standardized file format recognition. Its modular design allows for timely updates that support new technologies and enhancements to existing ones without necessitating costly or lengthy revisions. These components combine to create a cross-platform architecture that allows developers to create multimedia content once, then distribute it across multiple platforms with virtually no additional work.

As with video, audio media can be played and edited in much the same way. Again, a variety of players are available ranging in price and sophistication, from small shareware programs to sophisticated digital audio workstation systems. Most of the time, however, the same player used for accessing video media works quite well with files containing only audio information. Like their video counterparts, audio players are available which are not only able to play audio media, but edit the audio content as well. These usually incorporate a selection of built-in codecs, and may be able to access additional ones available on the host system.

Picking a Player for Your Team

A good player, or application that incorporates its own built-in player, is one that does not have to rely on extensive driver libraries for support of other platforms, applications or configurations. It will natively support the relevant codecs and give the digital content creator the ability to easily view or manipulate media from a variety of sources without concern for file or format compatibility. Whether your application for digitized media is as involved as nonlinear film compositing or as simple as the creation of a few basic animated files for the Internet, a general understanding of the codecs and principles involved will go a long way to helping you create better, more effective content for your viewing audience.

Delivering Video Over the Internet

By Sheldon Liebman

The past few years have seen a great deal of activity in the area of delivering video content over wide area networks (WANs). Previously, this process involved expensive, high-speed private networks. More recently, steps have been taken to use the Internet and its global connectivity as a conduit for video delivery.

One of the reasons private networks have made sense in the past is that many companies were trying to deliver realtime video across a WAN. The use of private networks ensures that both parties can achieve a connection speed and quality of service that supports the sharing of video content, which requires a tremendous amount of bandwidth. One second of uncompressed ITU-R 601 video is over 30 megabytes (MB) in size, which is very difficult to transmit in realtime. For this reason, compression technology is usually applied to the source material. With MPEG-2 compression, the bandwidth requirements can be reduced significantly, but high-speed links are still required for realtime sharing of video.
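
That figure follows directly from the ITU-R 601 sampling parameters. A worked check in Python (the 30 MB claim matches the full 270 Mbps serial rate; the active-picture payload alone comes to somewhat less):

    # ITU-R 601, 525-line system: 720 luma samples x 486 active lines,
    # 4:2:2 sampling at 8 bits, ~30 frames per second.
    active_bytes = (720 + 360 + 360) * 486   # Y + Cb + Cr samples per frame
    per_second = active_bytes * 30           # bytes of active picture per second

    serial_rate_mbps = 270                   # full 601 serial interface rate
    print(per_second / 1e6)                  # ~21.0 MB/s of active video
    print(serial_rate_mbps / 8)              # 33.75 MB/s on the wire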

However, there are problems with this approach. The first is that both locations need to be available at the same time in order for it to work. The second is that both locations need to be connected at the proper speed. Even if one location can send realtime video, can the other receive it in realtime? Finally, there is the issue of network reliability. In a realtime environment, it can be annoying if there is a hiccup in the network and the stream is interrupted.

Increasing the Options

For many applications, however, the use of realtime video is a bonus rather than a requirement. After all, most video applications today are based on sending videotapes from one location to another. Even broadcast programming is often received at one time and broadcast at another. Unless the sender and the receiver need to view the material both immediately and together, realtime delivery is optional.

If the realtime requirement is removed, a store and forward process can be used. With this technique, the transmission process is completely separated from the creation and playback process. Instead, video clips are stored at the source and at the destination, increasing the options for transmitting the information.

By choosing this approach, television stations and production facilities can transmit material to remote locations whenever they choose. They can also control the quality of the material, which determines the size of the video file that is created and the amount of time it will take to transmit at a given bandwidth. For approval video, lower resolution settings can be used; for final copy, broadcast quality material can be transmitted that is full-resolution, full-motion and full-screen.

A Public Solution

In early approaches to this problem, lower-speed private networks were used. More recently, companies have developed ways to use the Internet Protocol (IP) and IP addressing to move material from one station to another. With video over IP, the world's largest public network, the Internet, is now available for transmitting video.

This process is based on standard FTP transmission and allows any device with an IP address to transfer a digital video file to any other device that has an IP address. This technique offers many advantages.
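
At its simplest, such a transfer really is a plain FTP session. A minimal sketch using Python's standard ftplib (the host name, login and file names are placeholders):

    from ftplib import FTP

    # Placeholders: substitute the receiving station's address and login.
    ftp = FTP("ftp.example-station.com")
    ftp.login(user="video", passwd="secret")

    # Stream the clip to the remote server in binary mode.
    with open("segment_01.mpg", "rb") as clip:
        ftp.storbinary("STOR segment_01.mpg", clip)
    ftp.quit()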

The most obvious advantage to using the Internet for video transmission is that the requirement for matched connection speeds disappears. The initial connection is simply a gateway to the very high-speed Internet backbone. In this way, companies using low-speed connections (like traditional modems) can send information to companies using medium-speed (DSL or cable modems) or high-speed (T1 or higher) Internet connections. In fact, with the development of satellite-based Internet connectivity, even the need for a physical connection is removed, with the advantage being that only the intended recipient(s) can actually receive the information being transmitted.

Using IP and the Internet also makes it easier to distribute video information to more than one location. In a multicasting environment, many locations connect with a single server to access the same (video) file at the same time. This is similar to current video streaming technology over the Internet, although the result is a digital file containing broadcast quality video instead of a realtime display. Or, if the server mimics the functionality of a traditional e-mail server, the same file can be sent to multiple recipients over a period of time. The difference between these two approaches is which party controls the process.

Another advantage to using the public Internet is that both sides are insulated from network problems. If a packet of data doesn't get through correctly the first time, it can be sent again and the process is completely transparent to the user. In the end, the resulting video is the same, whether or not the network is operating at peak efficiency.

Non-realtime systems transmit over the Internet every day. Home PC users capture video onto their hard disks, transmit those videos as e-mail attachments to anyone they choose, and the recipients can watch the video on their computer screens. In a broadcast video environment, the biggest change is the quality of the captured video.

This illustrates one of the biggest advantages of transmitting video over IP--it can go from virtually anywhere to virtually anywhere. If an Internet connection is available, you can send and receive video simply by knowing the recipient's IP address. Also, since many companies are already paying for full-time Internet connections, there may be no additional connection cost associated with transmitting clips using this process--it just piggybacks the connection you already have. Size and distance are no longer factors.

Of course, transmitting large files with FTP is not always a smooth process, especially if the receiving station doesn't have enough room to store the entire clip being sent. This is one of the problems faced by companies who want to transmit video files over IP, but solutions have already begun to appear.

Another issue that needs to be addressed is bi-directional communication. If a very large video file is being transmitted from point A to point B, how can we verify that it arrived?

In the same area, methods must be developed to deal with errors and glitches in the process. If an error occurs in the middle of transmitting a 1 gigabyte (GB) file, does the entire file need to be transmitted again, or is there a way to pick up from where the error occurred? Ultimately, if a transmission is unsuccessful, can a way be found to automatically retry until it succeeds?
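
FTP does offer a hook for picking up where an error occurred: the REST command, which Python's ftplib exposes as a rest offset. A hedged sketch of a resume-and-retry loop (it assumes the server supports REST):

    import os
    from ftplib import FTP, error_temp

    def download_with_resume(host, remote, local, tries=5):
        """Retry a large download, resuming from the last received byte."""
        for _ in range(tries):
            try:
                offset = os.path.getsize(local) if os.path.exists(local) else 0
                ftp = FTP(host)
                ftp.login()
                with open(local, "ab") as f:    # append to the partial file
                    # REST tells the server where to pick up the transfer.
                    ftp.retrbinary(f"RETR {remote}", f.write, rest=offset)
                ftp.quit()
                return True
            except (error_temp, OSError):
                continue                        # transient error: try again
        return False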

The research being done in the area of video over IP suggests that solutions to these and other related problems are just around the corner. Once they become available to everyone, the method by which we transmit video from one location to another may never be the same.

Internet Video: Bandwidth, Buzz And Interactivity Deliver a New Medium

By Jon Leland

Only in the astounding world of the Internet could a technology that is barely five years old inspire (at least in part) the biggest merger in entertainment history. As many journalists have already explained, perhaps the biggest factor in AOL's decision to acquire Time-Warner was access to the latter company's current and future cable modem customers. Of course, the most commonly mentioned use of the broadening of Internet bandwidth (as delivered by cable modems among other technologies) is video. Video on the Internet is still only an emerging marketplace, though video programming on the Internet is already being pioneered and delivered. While the future convergence of Internet programming and DTV channels is one dimension of the future, the emergence of video on the World Wide Web has already taken on a life of its own.

In fact, the Web is not one thing. It is many. And the world of Internet video also has many faces. For example, there are entertainment sites including extensions of existing broadcast channels, sites that are ramping up to deliver pay-per-view movies (such as MeTV.com) and creators of original Web video content. This latter, rapidly expanding market includes such major players as Pop.com, a co-venture of Dreamworks SKG and Imagine Entertainment, and Macromedia's Shockwave.com, which has signed deals with South Park creators Matt Stone and Trey Parker, as well as film director/animator Tim Burton, among others. In addition, there are many other kinds of video being distributed on the Web. These include training, corporate communications and distance learning, as well as personal/amateur productions of all kinds. Likewise, there are also a variety of technologies in use while others are still being developed.

Essentially, the technology that supports the realtime delivery of both live and on-demand video programs has evolved rapidly; at the same time, audiences have enjoyed increased access to faster modems and higher performance (broader bandwidth) Internet connections. When combined with the Web's own unique forms of interactivity, a completely new media platform is being created.

In fact, on-line video created a "buzz" for being one of the next "big things" (i.e. an important, emerging Internet technology). However, its unique character and grass roots beginnings may lead one to believe that traditional broadcasters may not have the necessary new media experience to compete with more Web-savvy start-ups. Just as many "bricks and mortar" retailers have suffered at the hands of their more virtualized e-commerce competitors, broadcasters are certain to be challenged over the next few years by a new medium being built on completely new technologies and with a whole new variety of viewer relationships.

Tech Perspective

Never before has a video distribution platform been so easily accessible to so many, and when combined with cost performance breakthroughs like DV production and desktop post production, video on the Internet represents the possibility for a grass roots revolution. In its start-up years, video on the Internet was limited by bandwidth. While that's still true to a large extent, it is changing rapidly. As the cost of computers drops radically while PC performance continues to accelerate, some experts think the cost of higher bandwidth is dropping even faster. One Internet video executive told me, "The cost of Internet bandwidth [bits per second] is dropping even faster than computing power [MIPS]."

Innovations like DSL and cable modems, combined with the increasing accessibility of corporate and educational networks, are creating an on-line world that is increasingly rich in video quality-enhancing bandwidth. This ever-expanding pipeline is certain to enable more and more "VHS-quality" Internet video viewing experiences in the very near future.

Streaming Complements Downloading

Historically, in the "old days" (which in "Internet time" means just a few years ago), all video on the Internet consisted of digital files that had to be downloaded. At the time, this was a time-consuming process. The first significant Internet application to break the "downloading barrier" was Progressive Networks' RealAudio. (That company is now the streaming industry leader and has changed its name to RealNetworks.) For relatively early Web browsers, RealAudio introduced the concept of "streaming" media. By utilizing compression and a RAM buffer, RealAudio enabled the immediacy of virtually realtime audio playback. RealVideo followed in less than two years. However, while streaming is the standard for realtime, live Webcasts, the improved quality of the media files that are now available for download via higher bandwidth connections has led to a resurgence of downloaded digital video. The significant popularity of the MP3 audio format is the most prominent example of this trend, and downloading continues to be important as an ongoing component of the on-line video environment. This was underscored in the summer of 1999, when Apple reported 23 million downloads of the QuickTime version of the trailer for the movie Star Wars Episode I: The Phantom Menace.

Multiple Streaming Standards

Despite the lesser quality, streaming has become the most popular way to view video over the Internet because of its immediacy. Clips are available for viewing much more quickly. Technologically, streaming utilizes a client-server software system to load the first part of a media clip into a memory buffer on the user's PC. Then, while that segment begins playing on the viewer's screen, the software streams the next segment into the buffer so that it is ready to begin playing as soon as the first segment is finished. Thus, streaming provides audio and video-on-demand (VOD), including access to live events, over the Internet. Now, with all new computers shipping with at least a 56K modem, and with many users getting even faster connections, the quality is improving quickly.

While there were initially almost a dozen companies battling for the Internet streaming video standard, three remain as viable leaders. The current leaders in terms of streaming and other multimedia architectures for the delivery of video via the Internet are RealNetworks with its RealPlayer, Microsoft with its Windows Media Player and Apple with its QuickTime platform. At the time of this writing, the most popular Internet video sites most commonly offer RealPlayer, with Windows Media Player a close second (and frequently offered as well as RealPlayer). QuickTime is more frequently found on smaller Web sites where files are offered for download (or "progressive download," explained below). Apple launched the streaming version of QuickTime during 1999, and it has yet to enjoy widespread use.

Regardless of the streaming platform, video on the Web offers a new kind of viewer control over video material. This is the first generation of truly interactive video on a network; and Web enthusiasts appreciate the power to view whatever video clips they want whenever they want them. (As an aside, the streaming market also includes other "microcast" audiences and applications. These include, for example, SitePath and Cisco IP/TV, which are targeted to corporate Intranets, as well as other forms of hybrid distribution including satellite services, video conferencing systems, WebTV and other so-called Internet appliances.)

Types of Streaming

Even with the partial shake-out in delivery software options, Internet video is still a complex medium to comprehend because it continues to support a variety of delivery formats. These include live events and on-demand programs, single streams, multiple streams and multicasting. In fact, the competition is heating up to deliver the massive server and distributed networks necessary to Webcast to larger audiences.

Network services companies like Akamai (which has an offer pending to buy Web video service provider InterVU), Digital Island and others are competing with streaming video hosts like the Real Broadcast Network (RBN) and Yahoo Broadcast.com to deliver these services. However, few of these vendors agree on a specific technological approach. Each one claims to know the market better and brags about its proprietary systems and technologies. (For more on this please see my article, "The Pomp and Promise of QuickTime TV" at www.mediamall.com/promedia/videoWeb/QT_TV1.html.) There are also different kinds of Webcasts, including live and on-demand. Live Webcasts have less flexibility and must be delivered with lesser quality because the compression of their footage must take place in realtime. On the other hand, live Webcasts can also use a technique called multicasting to reach larger audiences with far less server capacity than single stream events. On-demand programs are available whenever requested, 24 hours a day, seven days a week, and can feature high quality and the benefits of post production.

In order to face the challenge that I call "The Grand Canyon" Gap between people and technology, I have found it necessary to create some of my own terms for the various types of streaming video applications. I break these technologies into four general types:

Pseudo-Streaming: Also known as "progressive downloads," this approach is not true streaming because it uses hard drive space rather than a RAM cache or buffer. Pseudo-streaming is a file download that is enhanced to enable the viewer to "screen" part of the clip during the process of the download. QuickTime, which has pioneered this approach, uses a technique called FastStart that enables the video to start playing as soon as there is enough material transferred to the user's hard drive. However, once the viewer reaches the end of what has been transferred (assuming that the playback is faster than the download), he must wait until more content has been transferred before continuing to watch. As a monitor for this experience, the QuickTime play bar, which is located below the digital video image, fills progressively with a black stripe in order to show the user how much of the video clip is currently available for viewing.
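
When the connection is slower than the clip's data rate, the necessary head start can be computed up front. A small Python sketch of the arithmetic a FastStart-style player must effectively perform (the figures are illustrative):

    def start_delay_seconds(file_mb, duration_s, download_kbps):
        """Seconds to pre-buffer so playback never catches the download."""
        download_s = file_mb * 8000 / download_kbps   # total transfer time
        return max(0.0, download_s - duration_s)

    # A 10 MB, 60-second clip over a 56K modem (~48 Kbps effective):
    print(start_delay_seconds(10, 60, 48))    # ~1607 s: modem users wait
    # The same clip over a 1.5 Mbps DSL line:
    print(start_delay_seconds(10, 60, 1500))  # 0 s: plays immediately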

Since pseudo-streaming is, in fact, a standard TCP file transfer, it also introduces a copyright issue that is of concern to some producers but is not an issue with streaming. As streaming constantly caches and flushes the video stream from the user's computer memory, the user never receives a copy that can be saved or distributed. Downloading, on the other hand (as illustrated by the MP3 phenomenon), may provide a copy of the digital file with which the user can do as he or she pleases.

Mono-streaming: This is the basic approach to streaming. It delivers a single video or audio clip that's fed in realtime, and thus avoids downloads altogether. This approach is supported by a dedicated server such as a RealVideo, Windows NT or Apple QuickTime server that utilizes the UDP (rather than the TCP) protocol. Streaming delivers the audio or video clip while it's being viewed or listened to, and for this reason there's no disk storage (which also inadvertently protects copyrights). Its most important feature is immediate, on-demand access over the network. It is applicable to either live events or to stored video programming.

As mentioned, Akamai and other providers are building networks of streaming servers on literally hundreds (and soon thousands) of servers around the country and the world to facilitate streaming delivery to thousands (if not millions) of simultaneous users. The essence of this kind of service is proprietary software that instantaneously finds the content you want on the best server on that network for your geographic location. The most appropriate version of the requested clip is selected intelligently based on the content, the servers available in various locations and a "weather map" of Internet traffic at that moment.

Multistreaming: This is a term I've invented. It is also referred to as "synchronized multimedia" and in some cases as "illustrated audio" or "illustrated video." Multistreaming combines the audio or video streaming process described above with other synchronized media content, such as scrolling text and data streams (like realtime stock quotes), HTML page flips, synchronized images, and MIDI sound. All of these media types are synchronized with the main video or audio stream to create a visual presentation composed of multiple simultaneous streams. The images and other media types are synched to timings in the audio/video track (think of Ken Burns' Baseball series on PBS), but they are displayed next to the clip (or elsewhere on the screen) rather than as part of the video clip.

The most dynamic use of these capabilities is currently demonstrated by the G2 version of RealPlayer, which offers a window that can include multiple media types and a selection of streaming video channels. The multiple media types are displayed within the RealPlayer video window using the Synchronized Multimedia Integration Language (SMIL) for layout. In this context, streaming video clips are most frequently complemented with clickable links that offer immediate access to other clips, such as other news stories. QuickTime also supports integration of multiple media types within one window; however, these capabilities are not as widely used, certainly not in the streaming space. I believe multistreaming is a very important development and can help to differentiate the on-line video viewing experience. Yet, it is still largely underutilized.

Multicasting: This term has been misused, in my opinion, by broadcasters proposing to multiplex several compressed digital channels into their new DTV spectrum allocation instead of broadcasting HDTV. In Internet terms, multicasting has already been in use for years, as it was pioneered by the MBONE. Multicasting refers to using multiple servers to "spread the load" of streaming in order to multiply the number of available streams, and thus expand the potential audience significantly. However, it is important to note that multicasting can only be used for live events and live programs.

Here's how it achieves its distribution advantages. If one server delivers 100 streams to 100 other servers that, in turn, each deliver 100 streams to users, you quickly have 10,000 available streams originating from one server, but without any one server carrying an impossible load. MCI and RealNetworks, as well as perhaps the largest Internet Webcaster, Yahoo Broadcast.com, are building dedicated multicast "networks" or server systems.

In fact, Yahoo Broadcast.com founder and CEO Mark Cuban insists that multicasting is the only viable way to build a network of servers for large, live Webcast events. He told me, "When it's the bottom of the ninth [in a live baseball game] and the server on one of your ISP's network fails, what are you going to do? Page the ISP and ask them to fix it? It's easy to route around when there is just one thing going on. But when there are 30 or 40 or 1,000 [live] events like we have going on, then one server failing impacts tens or hundreds of events." For these reasons, Broadcast.com is building a multicast network which uses distributed routers instead of servers. Cuban claims that they already have 700,000 "multicast-enabled dial-up and broadband ports." Cuban immodestly claims that his network "blows away distributed servers for live [events] any day of the week." For more information on Yahoo Broadcast.com's multicast network, visit: www.broadcast.com/about/multicast/.

The Components: How Streaming Works

The best available quality video streaming requires a combination of a UDP server, a compression codec and a client or browser player. Here's how these components work together to provide the virtual miracle of realtime video at dial-up bandwidths.

Servers: Perhaps the most fundamental component of most multimedia servers is the use of the UDP (User Datagram Protocol) networking protocol rather than the Internet-standard TCP (Transmission Control Protocol). TCP is a connection-oriented protocol that makes reliability its first priority. The downside is that it is inefficient for data-intensive applications like streaming media because it is "chatty" as a result of a continual process of data receipt confirmations. In the world of streaming media, where dropped packets are to be expected, the resulting attempts at data recovery take longer than they are worth because they hurt performance. However, the TCP protocol is simpler to configure because it runs on a net-standard HTTP server; thus, when video clips are being downloaded, it treats video or audio just like any other data type.

UDP, on the other hand, trades TCP's guaranteed delivery for efficiency. Roughly speaking, the UDP software application tells the server, "I'll tell you if something important breaks down; otherwise just keep sending more data." However, because UDP is not the Net's standard protocol, it requires specialized server software. Beyond performance, UDP also enables important and more sophisticated streaming functionality, such as buffering, load balancing (to facilitate the efficient delivery of more streams from one server), live streaming, multistreaming, multicasting and random access play (so a streaming clip can have pause and rewind buttons).
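
The difference is visible at the socket level: UDP simply sends datagrams, with no handshake and no receipt confirmations. A bare-bones Python illustration (the address and packet layout are placeholders, not a real streaming protocol):

    import socket

    # UDP: connectionless "fire and forget" -- no handshake, no ACK chatter.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(100):
        packet = seq.to_bytes(4, "big") + b"\x00" * 1400  # header + payload
        sender.sendto(packet, ("203.0.113.5", 5004))
    sender.close()

    # A receiver tolerates loss: a missing datagram is simply skipped,
    # where TCP would stall the stream to retransmit it.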

According to one engineer, "The Internet is a dirty place where one out of 10 packets gets dropped. On a good day, delivering a video stream without a specialized server might work. But when you as a content provider have limited bandwidth or anticipate a big volume, you'll want a server that makes the best possible use of the available bandwidth." Clearly professional media companies require higher performance as well as interactivity-enhancing features.

For example, one chief technologist for a large video-oriented Web site told me that he assumes an audience of thousands of users. His broadcast orientation demands UDP performance and begins to define streaming in terms that go beyond most of today's applications. To this man, a single stream is just the beginning. He said, "Mono-streaming audiences that fill a T-3 (bandwidth for hundreds of viewers), that's a demo." In other words, hundreds of individual streams just won't create an audience big enough to support a commercial Webcast.

Codecs: Since the compression-decompression algorithm known as a codec is frequently bundled with a server-player platform, many people confuse the codec with the server-player. That is changing as the players attempt to become utilities. For example, RealPlayer, Media Player and QuickTime are now supporting as many different codecs as possible (e.g. AVI, MPEG), with the obvious exclusion of their direct competitors (for example, Media Player does not support the RealVideo codec). For this reason, compression is increasingly becoming player and server independent; however, the codec "connection" to server-player systems continues because there are yet to be any real codec standards. Even MPEG, for example, has several different low-bandwidth versions. As a result, the relevant codecs that your system needs to play a particular streaming video clip are still most commonly downloaded and installed as part of the video player system set up on the user's PC.

Players: On the browser or user's system, a video player is required. Frequently these are integrated with the browser either as a plug-in or "Helper" application (Netscape Navigator) or as an Active-X controller (Microsoft Explorer). Like having a properly configured set-top box on a cable system, these software receivers are required for the integrated software delivery that streaming demands. Some solutions, such as Emblaze, use the Java programming language to provide "playerless" video delivery; however, Java's quality compromises and technical glitches are still too common for this approach to be popular.

Technology Trends

The streaming "arena" continues to change rapidly. While there are certain to be unpredictable changes ahead, the following are three major trends that appear likely to continue, at least in the near future. These are important to keep in mind as you and your company plan your participation in this new media "universe."

The Platform Wars: Although there have been (and still are) many competitors, RealNetworks claims that 75 to 80 percent of the streaming content on today's Web is in one of its formats. However, just as it did to Netscape in the browser wars, Microsoft is threatening to change this situation, and its share of the streaming market is growing the most rapidly at the time of this writing. Beyond evolving its own technology, Microsoft has an especially strong position among corporate networks, and it is using its exceptionally strong cash position to make strategic investments (including $30 million in InterVU) to leverage an even stronger position in the Internet video market. Many corporate networks are standardizing on Windows NT servers, and Microsoft's media server is bundled free with Windows NT Server (recently upgraded to Windows 2000). The Windows Media Player, like the most recent release of Internet Explorer, is now standard with Windows 98, enabling Microsoft to conveniently extend its streaming player penetration. Meanwhile, Apple's QuickTime is a familiar platform to many video producers and is already integrated into many nonlinear editing systems. While QuickTime made a late entry into the streaming market, there is still time to play catch-up, and Apple's corporate resurgence underscores the fact that it should not be dismissed. Indeed, Apple's interest in the digital video market is growing, as evidenced by its acquisition and release of the Final Cut Pro digital video editing software and its release of special DV versions of the iMac. RealNetworks may be the current leader, but Microsoft is clearly taking aim at this market, and we have not heard the last of Apple.

These three players are clearly the streaming platform players to watch.

Better Bandwidth: As mentioned, the increasing accessibility of broadband connections and the emergence of additional technologies will further enhance the on-line video viewing experience. From DSL connections to corporate intranets and cable modems, from satellite systems to wireless systems and other video networking offerings like SitePath, there's no question that higher quality video is on the way.

In fact, research firm Paul Kagan Associates reports that there are 137 million homes that are "broadband ready," and it projects an Internet broadcasting market just shy of $20 billion by 2008. As a result, broadband video providers like Excite@Home and some of the larger telecommunications companies have begun to experiment with special video programming designed especially for high-bandwidth customers. This trend is certain to continue: as the availability of broader bandwidth increases, so will the variety of streaming video programming.

New Content Formats: Since Internet video is presented on the Web and is usually triggered from a Web page, it sits alongside the Internet's vast store of print and graphics information, which naturally complements the video presentation. Internet video exists within the medium of the World Wide Web itself. This is an interactive relationship that not only changes audience dynamics, but also presents (for better or worse, depending on the skill of the producer/developer) whole new varieties of communication challenges. Furthermore, it presents a whole new generation of advertising/sponsorship applications that are unfolding right before our eyes.

Announcements like the Reuters investment in Virage (the video indexing software company that demonstrated keyword-searchable on-line access to video footage during the Clinton-Lewinsky scandal) indicate increasingly broad search capabilities for video on the Web. Short-form programming (for example, video clips up to 15 minutes long) also seems to be more appealing on the Web. In short, this new medium is still being invented; the only thing we can say for sure is that it is not TV. What it is remains to be seen. Watch for more innovative and varied on-line presentation environments as this on-line video space evolves.

Challenges And Obstacles

The leading edge wouldn't be referred to as the "bleeding" edge if it didn't have its share of challenges and obstacles. Here are four important challenges to consider:

Revenue: Large-scale Web sites seem to succeed more on the basis of market capitalization than on profitability. While there is broad agreement that the world of Web media is a huge opportunity, it also seems clear to me that developing profitable businesses will take time. Meanwhile, the Web will be used, as it is today, as a complementary medium both for promotion and as a new layer of interactivity to existing video channels. In other words, in many cases, video on the Web will live alongside (if not feed from) traditional broadcasting. The pioneers and innovators are most likely to be venture funded--in one form or another.

Firewalls: Streaming video that uses the UDP protocol described above has frequently run into a technical obstacle when confronted by firewalls designed to protect the security of corporate networks. While the streaming software companies are working with the firewall software companies to allow video streams access to these corporate networks, this is a process that takes time. The challenges of bandwidth management have also limited streaming video's accessibility on WebTV and on some cable modem systems. This is a potential stumbling block and also a Microsoft advantage, at least for companies using Microsoft networking solutions.

Communication Design: For my money, this is perhaps the most critical issue, but it is a creative, not a technical, challenge. Just because you make something interactive doesn't mean people will want to interact. I call this "The Participation Paradox." In essence, couch potatoes are more comfortable channel surfing--and it will take more compelling interactive programs and applications to get traditional video viewers more involved in the Internet. And it is especially challenging to attract viewers in sufficient numbers to create compelling business models.

User Configuration Hassles: The sheer array of browser plug-ins and other video players both confuses and complicates the life of the viewer/user, who is used to relatively simple operations like switching channels and pushing "play" and "rewind" on a VCR. Some computer users are early adopters, but to reach a wider audience, configuring a computer to play streaming video must become much simpler. While interim solutions like automatic network upgrades are being built into newer software like RealSystem G2, in the near term, browser configuration is more of a hassle than it should be--and sophisticated video-enhanced Web sites all need to provide support to help users with their streaming setups.

How to Make the Most of Streaming Video

While the increase in available bandwidth is beginning to alleviate these problems, streaming video for consumer bandwidths still forces producers to make distinctions between "high motion" and "low motion" video that just don't come up when working with full-motion video. Unfortunately, these considerations are the opposite of what's normally referred to as "high production values." As anyone who has worked with compressed video (for example, for CD-ROMs) knows, camera moves like pans, zooms and dollies, which are normally used to enhance the visual interest of a production, cause low-bandwidth codecs to "choke," visibly degrading image quality.

As a result, streaming video producers who are concerned about the low-end dial-up user's experience need to tread lightly on the compression-decompression process by limiting, wherever possible, movement in the video frame that makes the compression process more computing-intensive. Talking heads, for example, are the easiest to compress because so little of the image is moving. On the other hand, movement in the background, such as people walking across the frame, and camera movements, such as zooms or dollies, further reduce the quality of streaming video. Likewise, special effects such as wipes and DVE moves are difficult for codecs to handle. Producers should reduce or eliminate unnecessary transition effects from their productions before encoding for streaming.
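
A crude way to see why is to measure how much of the image actually changes from frame to frame, since inter-frame change is what a low-bandwidth codec must spend its bits describing. The following Python sketch (the function name and the use of NumPy arrays are my own illustration, not part of any codec) scores that change:

    import numpy as np

    def motion_score(prev_frame: np.ndarray, frame: np.ndarray) -> float:
        """Mean absolute pixel change between consecutive 8-bit grayscale frames."""
        return float(np.mean(np.abs(frame.astype(int) - prev_frame.astype(int))))

    # A locked-off talking head changes only the pixels around the mouth
    # and eyes (low score); a pan or zoom shifts every pixel in the frame
    # (high score), forcing the codec to re-describe the whole image.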

These inherent limitations of low-bandwidth streaming video are what make me think additional components displayed next to the video should be used more frequently to create a more compelling presentation. For example, consider a distance-learning computer science course at Stanford. In this application, the lecturer is shown and heard in a small streaming video clip while roughly two-thirds of the remaining screen "real estate" is used to display bullet-point-style graphics that complement his talk. Referred to as multistreaming, or by the wonderfully oxymoronic term "illustrated video," this technique can, I believe, go a long way toward making the most of the new on-line video medium. In addition, many producers will benefit from using a network of resources to deliver their streaming productions rather than attempting to do everything themselves. While the market is still emerging, I believe that many "content providers" will be effectively served by the on-line equivalent of desktop publishing "service bureaus," or what might be called post-post-production houses, which will provide encoding services and heavy-duty servers on a time-share basis. CNN, for example, outsources its video hosting to InterVU and, in fact, made an investment in that company.

Something To Say

Given the glut of information already on the Internet, the good news for video professionals is that there is an increasing need for talented and knowledgeable interactive producers who can turn these new technologies into viable programs. In this new environment, I think we have to avoid what my friend from the Corporation for Public Broadcasting, Ted Coltman, calls the "channel mentality." You don't need a full-time channel's worth of content to say something worthwhile. On the contrary, I think it was Blaise Pascal who said, "I would have written you a shorter letter, but I didn't have the time." The bottom line is that it's what we have to say that will make good use of these new channels, and perhaps that is the most important consideration of all. While broadcast DTV is still building its technical infrastructure, Internet video is already coming to life. For video programmers, the "bad news" may be that the Web offers a more complex creative environment than any medium we've ever faced. The "good news" is that the Web's wired interactivity and special-interest communities can lead to remarkably rapid growth, especially when the programming is truly synergistic with this vibrant new medium.

Internet video offers a fresh media opportunity that virtually all of us expect to have an extraordinarily bright future. What we make of it is up to us.

Stay tuned.

Resources

It's common sense, but frequently forgotten, that the Web itself is the most important resource for more information, especially on a technical subject such as this. Therefore, here are a few useful links. Broadcasters may want to start at "The Antenna," a site dedicated to TV and radio station sites that are using streaming video and audio: www.theAntenna.com. You may subscribe to very active Webcasting and Internet-Broadcaster mailing lists through: www.intervox.com/Webcast.htm.

RealNetworks: www.real.com

Microsoft Windows Media Player: microsoft.com/windows/windowsmedia/

Apple QuickTime: www.apple.com/quicktime

Loudeye.com (formerly Encoding.com), by far the Web's largest encoding specialist: www.loudeye.com

InterVU, "the video delivery company": www.intervu.com

A comprehensive multicasting resource page: www.ipmulticast.com

Terran Interactive's Codec Central (provides an overview of different codecs and their applications): www.codeccentral.com

Duplication: The Realities Of Digital Media

By Tim Wetmore & Terence Keegan

It may seem an anachronism in this new century to talk about duplication and distribution of physical formats while television and film companies are busy making Internet distribution deals. Yet, in the here and now, the cold, hard truth is that those engagements are more about manipulation of stock prices than the realities of distributing high quality images.

Activity in streaming and "netcasting" continues to increase as the months go by, but until broadband delivery is common in almost every home and in almost every corporate and conference facility, it remains true in the year 2000 that physical media is the best route for high quality video distribution (aside from broadcasting and cable).

Of course, it's natural for the uninformed to think that since a production was done in digital video for digital (perhaps high definition) TV then it makes sense to release the program to the world, if not on the Internet, then on one of the newer digital media. In some cases this will be true, while in others it's the worst possible move. Either way, the decision on which medium to use for duplication and, thus, for distribution, should come in the early stages of planning the production itself, not as an afterthought.

Like every other part of producing video, duplicating and distributing have many hidden traps, with repercussions reaching back through the production chain. Invariably, the process also takes longer and costs more than originally thought. Proper planning will help duplication move more quickly and smoothly and will, in the long run, save money and result in a superior product.

It's amazing how several short months can make a difference in the choice of media for duplicating and distributing a video project. Not too long ago, smart money was on various tape formats, with some small applications for CD-ROM, while DVD was seen as viable for only high budget specialty work--and video streaming was laughably out of the question. Well, now it's a brave new world, baby!

DVD has arrived, and in a big way. True, the most cost-effective way to distribute video to the widest audience is still tape. Since this chapter is part of a DTV book, it should be noted that there are special considerations regarding that form of distribution. If a program is digital, but not high definition, any of the standard formats will work, and DVD, unlike most tape formats, offers a 16:9 aspect ratio. If, however, the program is high def, then the picture is, shall we say, less clear. Tape remains the best format for distributing high definition programming; in all probability it will be at least a year or so before a so-called HD-DVD standard is hatched to support HDTV data transfer rates.

But if the higher-ups are demanding digital programs that offer interactivity or, heaven forbid, surround sound, then DVD is your ONLY choice. And, with continued mass acceptance of the format after the 1999 Christmas buying season (when DVD-Video players sold for under $200) and a subsequent installed base well beyond five million units, program producers are more likely to find that people can play back DVD programs in board rooms, conference rooms and even at home. (In non-entertainment applications, some firms are finding it cost-effective to deliver to their non-wired clients a DVD player along with finished DVD projects!)

CHOOSING A FORMAT

With all this said, let's have a quick look at the formats.

CD-ROM, which was supposed to, by now, fall to DVD-ROM's domination, has been able to hang on as a viable distribution option thanks to a lack of widespread DVD-ROM development tools; end-user incompatibilities between DVD-ROM software and the multiple decoding solutions on the market; and a lingering price issue for DVD-ROM development and replication.

The installed base of DVD-ROM drives exceeds that of DVD-Video players, and since these drives are backwards-compatible with all CD formats (including CD-R and -RW), content owners continue to utilize CD-ROM as if nothing's changed. However, once again, the "pundits" are predicting a big push for DVD-ROM software in the coming months. And indeed, while development APIs and consumer hardware compatibility issues are ironed out, both Nintendo and Sony plan to release their next-generation, DVD-ROM-based videogame console systems this year--which will spur the PC gaming industry to exceed the console offerings with DVD-ROMs of its own. This, in turn, will work wonders in advancing DVD-ROM development operations and driving down costs.

Still in all, CD-ROM will remain a safe bet for the years to come--that is, if video quality is not an issue. If you are producing a linear program for general distribution, then your medium should be videotape. If you decide to do this, then you should take the highest quality digital videotape you can get from your post facility and hurry to your local video duplicator (most of them advertise in the phone book or can be recommended by your post facility).

If, however, you insist on the optical format, there are now three types of masters you can provide to your replicator: CD-Rs (obviously not blank ones); DVD-R (now available in 4.7GB form); or DLT--Digital Linear Tape. DLT is by far the preferred format and has become the de facto standard for optical video replication; some replicators are also testing mastering from write-once DVD-R discs.

Disc replication can be the source of some controversy. There have been cases where replicated audio discs or CD-ROMs have not been up to the standard expected by the title holder (in this case, you). Fingers begin pointing at this stage. Some maintain the replication wasn't done right; others insist the mastering process was improperly handled; still others will say that the many phases of encoding/decoding in the digitization process--from editing to final master to DLT to downloading the information to the LBR (the laser beam recorder used in optical mastering)--cause glitches.

After years of argument on this (bear in mind the number of problem discs is minuscule compared to the billions that have been made), someone decided, "Hey, what about the players?" An extensive, five-year survey was done to determine where problems enter the process, and in over 90 percent of the cases, the problems were deemed to be in the players rather than the process. So the process for replicating a CD-ROM won't be your problem; schedules and your perception of what the visual quality should be are the likely problem areas.

Every once in a while, a DVD disc will still trigger a compatibility issue with a particular DVD player/drive. Again, this is usually due to the player manufacturer interpreting the DVD specification differently than the authoring hardware/software supplier. The rule here: QC early, and QC often--the longer you wait to check, the costlier the fix will be. Authoring houses, replicators and several dedicated quality-assurance firms are equipped to perform playback tests on a bank of DVD-Video players and -ROM drives.

THE REPLICATION PROCESS

Briefly, here is the process for making a CD after the DLT is delivered to the replicator. Any decent post production facility should be able to provide you with a DLT for duplication, in addition to any other formats you may wish.

The process described hereafter is the same for both CD-ROM and single-layer DVD discs, with a caveat. A DVD can be single layer, single sided, like a CD-ROM, but after that everything is different. DVDs can also be single sided, dual layer; dual sided, single layer; or dual sided, dual layer. The replication process for these versions of the DVD format has some very significant differences from CD-ROM, though fundamentally we are talking about forming pits in a plastic substrate and coating it with a reflective layer for subsequent readout.
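
For reference, the four configurations carry the following nominal capacities (the DVD-5/-9/-10/-18 shorthand is the industry's; the figures are the commonly quoted ones):

    # Nominal capacities of the standard DVD configurations, in decimal GB.
    DVD_CAPACITY_GB = {
        "DVD-5  (single side, single layer)": 4.7,
        "DVD-9  (single side, dual layer)": 8.5,
        "DVD-10 (dual side, single layer)": 9.4,
        "DVD-18 (dual side, dual layer)": 17.0,
    }
    for config, gb in DVD_CAPACITY_GB.items():
        print(f"{config}: {gb} GB")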

But more on DVD later. For now, here is the basic process. A special glass is used--it's called float glass, the same glass used in the great glass office towers of big cities. (This crossover, in fact, is one reason replication costs have come down: the recent building boom's demand for the glass has brought its price down for all customers.) The glass disc is chemically and physically cleaned, then polished with a rare earth compound to remove anomalies, leaving a perfectly smooth surface on which to spin coat the photoresist.

This process occurs in an inline mastering machine that holds up to five cleaned and polished glass substrates. Each glass disc is placed on the spin coater and rotated at a predetermined speed while the photoresist is dispensed. The thickness of the photoresist is very precise and is confirmed by the machine. The coated glass then moves on to master recording.

Here's where the LBR comes in. The laser beam recorder uses the formatted DLT as its data source and records the information onto the glass master using a laser with a certain focal length. The information is recorded by modulating the laser to create exposed areas in the surface of the light-sensitive photoresist coating. The length of these exposed areas is determined by the various "books" specifying CD-ROM formats.

The glass master is rinsed with a developing solution, and the areas of the light-sensitive film that have been exposed to the laser are removed by the solution. This results in digitally encoded pits in the surface of the photoresist. The developed master is placed into a sputtering machine, where a thin film of conductive metal is deposited onto its surface; the master is then ready for electroplating.

Next comes what's called galvanics, which means the glass is dipped in a plating solution with a current running through it; after a predetermined time, the current is turned off and the glass removed. The plated metal layer is then separated from the surface of the master, and the encoded layer of pits in the photoresist has now been replicated as a series of mirror-image bumps. This encoded metal layer is called the stamper.

After further careful preparation, the stamper is put into the molding machine, where polycarbonate resin is injected into the mold cavity. The melted plastic flows over the bumps, replicating them as pits exactly like those in the original master. The replica is then transferred to a sputtering machine, where a reflective metal coating is deposited onto the encoded surface.

That's a very brief, oversimplified description of how optical formats are replicated. As mentioned before, with DVD there are numerous complications which make it very different from CD-ROM replication. It would take most of this book to fully explain these differences; more importantly from the program producer's point of view, the critical difference comes in the DVD specification, i.e. compressed MPEG-2 video and Dolby Digital six channel surround sound.

The implications of this are vast, covering everything from the post production process to the replication procedure. These factors can, and should, influence your choice of release format. If you are very concerned about video quality, you will need an experienced, talented compressionist working on a state-of-the-art Variable Bit Rate (VBR) compression system, and the program will then have to go through a Dolby Digital encoding workstation. (It should be noted that stereo and MPEG-2 audio are also part of the DVD spec, although the overwhelming majority of titles released in the United States so far have been Dolby Digital six-channel with stereo capability.)

One important factor is allowing enough time for compression, since the VBR approach means your compressionist will make anywhere from four to 15 passes over the entire video program to get it just right, and will have to take time to make artistic decisions, probably in discussion with the director, producer or other program representatives. You can imagine how long something like that might take.
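
To put a rough number on it, assume each pass runs at about real time over a 90-minute program (encoder speeds of the period varied widely, so these figures are a ballpark of mine):

    # Machine time alone for VBR encoding, assuming roughly
    # real-time passes over a 90-minute program.
    program_minutes = 90
    for passes in (4, 15):
        hours = program_minutes * passes / 60
        print(f"{passes} passes -> ~{hours:g} hours of encoding time")
    # 4 passes -> ~6 hours; 15 passes -> ~22.5 hours, before any of
    # the artistic review described above.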

A quick explanation of VBR: let's start with the "bit budget" concept. A single layer, single sided DVD has a total capacity of 4.7 gigabytes--that's your total budget. All the data has to fit into that size "bucket," including any and all audio channels. Since some scenes can be compressed more than others without showing the ill effects of MPEG compression (called artifacts), a compressionist will allocate a larger portion of the bit budget to some scenes and less to others. Scenes with great detail, lots of action and deep color density cannot stand a lot of compression and thus are allocated a greater number of bits than are static, talking head type scenes that can be compressed with few ill effects. The compressionist will, therefore, vary the amount of bits according to the scene, thus the term Variable Bit Rate. Part of the total information to be budgeted must include the data dedicated to navigation.
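
Here is a minimal worked version of that budget, with assumed figures of mine: a 120-minute program, a single 448 kbps Dolby Digital track, and a rough four percent set-aside for navigation data:

    # Bit budget for a single-layer, single-sided DVD (4.7 decimal GB).
    CAPACITY_BITS = 4.7e9 * 8                # the whole "bucket," in bits
    runtime_s     = 120 * 60                 # a two-hour program
    audio_bps     = 448_000                  # one 5.1 Dolby Digital track
    overhead      = 0.04                     # navigation etc., rough guess

    video_bits    = CAPACITY_BITS * (1 - overhead) - audio_bps * runtime_s
    avg_video_bps = video_bits / runtime_s
    print(f"average video budget: {avg_video_bps / 1e6:.2f} Mbps")
    # ~4.57 Mbps on average; the VBR passes decide which scenes get
    # more than that and which get by on less.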

Another little wrinkle here: does the duplication plan call for European distribution? If it does, remember that PAL has a higher per-frame resolution than NTSC and will therefore use up more bits. Since the bucket is only so big, that means more compression. Is that going to be acceptable? Better think about it. Also, if you know you are going into Europe or you have a complex program, ask your replicator well in advance about dual-layer DVD, with its greater capacity; the authoring process is more involved and the schedule will tend to stretch out. (DVD-18 is also now available in limited quantities, offered as of February 2000 only by Warner Advanced Media Operations (WAMO) in Olyphant, PA.)
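
The resolution difference is easy to quantify from the standard DVD raster sizes (the caveat about PAL's lower frame rate is my own):

    # Per-frame pixel counts for the two DVD rasters.
    ntsc = 720 * 480    # 345,600 pixels per frame at 29.97 fps
    pal  = 720 * 576    # 414,720 pixels per frame at 25 fps
    print(f"each PAL frame carries {pal / ntsc:.0%} of the NTSC pixel count")
    # 120%: every PAL frame is 20 percent larger. PAL's lower frame rate
    # claws some of that back, but the per-frame demand on the bit
    # budget is real.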

Now comes the fun part--the authoring. This is much more involved than with CD-ROM, for a number of reasons. These include the multiple channels of audio, the compression and the sheer amount of data stored on a DVD compared to videotape or CD-ROM (seven times as great). This means the branching and menu structure are much more complex than anything you've seen in any format before.

Given this information, let's go back to what we looked at in the beginning of this chapter: the complexity of the format means you are going to have to pre-plan in the production phase for your replication stages, because all of this will end up taking much more time than originally allotted. It won't work to hand off a digital Betacam master or even a D-1 to a dupe house and say, "Run me 500 copies (1,000 or a million, even)." Instead of taking a couple of days, it will take six weeks! No joke. The large facilities, like WAMO, recommend you allow no less than six weeks for the process.

They also recommend a D-1 master, or DLT. Some replicators might soon suggest you use DVD-R, a write-once format that recently stepped up to a 4.7GB capacity over its former 3.95GB limit.

SHORT-RUNS

Insanely cheap prices for writers and blanks over the past two years have given rise to the proliferation of CD-Recordable (CD-R) technology everywhere from home offices to content development firms of all types. Thanks to improvements in duplication speed and disc printing, a "CD-R duplication" sub-industry has blossomed, making short-run jobs (from 50 to 1,000) on CD-R a viable alternative to more expensive "pressed" discs.

Replicators will now urge that runs of under 1,000 units for any CD format be completed on a CD-R duplication unit (basically a bunch of high-speed CD writers daisy-chained to a master CD-ROM drive, which takes a CD-R as its master). Glass mastering, the most time-consuming and costly of the replication processes, is thereby eliminated.

Along the same lines, DVD-R duplication is emerging--many of the CD-R duplication systems can accommodate DVD-R drive upgrades. Relatively stiff drive and disc prices (around $5,000 and $50, respectively) will continue to block widespread DVD-R duplication, along with lingering copyright protection issues still to be implemented in DVD-R's format specification.

DVD: AUTHORING IS EVERYTHING (ALMOST)

You can't overestimate the importance of the compression/authoring process. If you've chosen (or been forced into) the DVD format, it was probably because of the menu-driven navigation system native to the format and the relatively high visual quality of the medium (better than VHS). Beyond the interactive capability and the fact that DVD is a bigger bit bucket than any other format, "marrying into the family" of DVD technologies may be an intelligent long-range strategy. The "family" means the various iterations of the format that are on the way; we have included a little glossary of the terminology often used when people refer to DVD that hints at the breadth of this family. Advance work is already being done on higher density storage that could bring the bit count up to as much as 50 GB per side, if manufacturing methods can be made more precise and the oft-hailed blue laser technology matures quickly enough.

In the meantime, DVD is a good choice for replication and distribution if you really need the interactive menu system and you don't need to reach a huge base of users (though at the end of 1999, total worldwide DVD player shipments had passed 5,000,000 units and were on an upward swing--still nothing compared with the 800,000,000-plus VHS players worldwide). The number of DVD-ROM drives is much more elusive, but is generally regarded to comfortably exceed DVD-Video's numbers, thanks to major computer manufacturers' integration of DVD-ROM drives as standard issue (or, at worst, a cheap option).

If you want the best compromise between video quality and wide distribution, then VHS may be the ticket, and the tab for duplication will be much smaller, though the interactivity of VHS is limited to how often you can rewind and fast-forward. And this takes us back to deciding on your replication/duplication medium during the planning stages of the production. If it's not an interactive title at all, VHS will do you fine. If interactivity is important and video quality is not, then CD-ROM may work. If you want the best of both worlds and can live with a limited audience, then DVD is your jam.
