The Technologies, Aesthetics, Philosophy and Politics of High Definition Video
 Terry Flaxton

In April 2007 at the National Association of Broadcasters convention in Las Vegas, High Definition changed forever. Whereas previous HD cameras had cost half a million dollars, Jim Jannard, a sunglasses manufacturer from California, managed to develop a new camera, called the ‘Red One,’ retailing at $17,500. This development signaled a change in the production of High Definition that was announced through its initial naming. The title – ‘High Definition’ – was meant to align the new technology with film, giving it more of a sense of quest than analog video, more of a sense of flight, a sense of the arcane, the hidden, thus producing something to aspire to and engendering a sense of being elite – in turn evoking some of film’s prior sense of mystery.


I am going to explore the rapidly changing face of HD and its impact, in terms of technical, aesthetic and societal perspectives.


I want to introduce an image that may be useful when thinking of HD: as the light falls at dusk and you are driving along, you might notice that the tail lights of the car in front of you seem much brighter than in daylight, and the traffic lights seem too bright and too colorful. The simple explanation for this phenomenon is that your brain is switching between two technologies in your eyes. The rods, inherited from our distant evolutionary ancestors and specialized for detecting movement and low light, are numerous at around 120 million; through them you see mainly in black and white. The second technology is much more sensitive to color: the cones, which are far less numerous at around 7 million.

                Color is a phenomenon of mind and eye – what we now perceive as color is shape and form rendered as experience. Visible light is electromagnetic radiation with wavelengths between 400 and 700 nanometers. It is remarkable that so many distinct causes of color should apply to a small band of electromagnetic radiation to which the eye is sensitive, a band less than one ‘octave’ wide in an electromagnetic spectrum of more than 80 octaves.
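The ‘octave’ arithmetic here can be checked directly: an octave is a doubling of frequency (or a halving of wavelength), so the number of octaves between 400 and 700 nanometers is log2(700/400). A quick illustrative sketch (the band edges are the figures given above):

```python
import math

# Visible light spans roughly 400-700 nanometers.
short_nm, long_nm = 400.0, 700.0

# One octave = a doubling, so octaves = log2(ratio of the band edges).
visible_octaves = math.log2(long_nm / short_nm)

print(f"Visible band: {visible_octaves:.2f} octaves")  # about 0.81 - under one octave
```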

                Human trichromatic color vision is a recent evolutionary novelty that first evolved in the common ancestor of the Old World primates. Early placental mammals had lost two of the four ancestral cone types, retaining only a short- and a long-wavelength variety. Human red-green color blindness occurs because, despite our evolution, the two copies of the red and green opsin genes remain in close proximity on the X chromosome. We have a weak link in our chain with regards to color. We are not 4-cone tetrachromats; we have three cone types, in some cases only two – and in extremely rare cases just one!

So, there are two technologies – rods and cones – between which there is a physiological yet aesthetic borderland. Keeping this idea in mind, if we apply the potential misreadings of eye and mind not to color, but to our ability to recognize different resolutions, then a similar potential sensorial confusion is possible in HD. In my own experiments with capture and display, it is becoming apparent that a viewer experiences a sensation similar to the illusion that there is more color at dusk when a certain level of resolution is reached. At that borderline between the lower resolution and the higher resolution, a fluttering occurs between the effects of both. I have found that at the lower level there is less engagement, as measured by the duration the audience is willing to linger with an image, and at the higher resolution there is more engagement. This is evidence of a switching between two states in the suspension of our disbelief – with high definition eliciting more visual fascination. What is really interesting to me, as an artist, is the boundary between the two states.



After the invention of television, it took many years to be able to record the analog video image. This was finally accomplished by creating a scanned raster of lines and inscribing what information was present in each line. This was the strategy of analog video in its two main forms: PAL and NTSC. When computers began to take over, scanning became obsolete (having only been necessitated by the state of magnetic control of electron beams and glass tube technology at that time); so a form of inscribing and recording the information that was independent of the scanned raster but was grid-like – digital in form and mode – took over. This became the now familiar grid of pixels that every camera sensor has. A sensor is like a frame of film in that it can be exposed in one go, unlike a scanned image, which takes time to build. But there are many issues with the technology that make it unlike film – a CCD, for instance, must be emptied of its charge line by line, while a CMOS chip is read out in a different way again.

                  Digital video was the transforming technology that moved us closer to high definition technology. It had 720 x 576 pixels to emulate the 625-line system in analog video (in PAL at least). The earliest forms of High Definition were analog, but being on the cusp of the digital revolution, HD soon became digital. Also, the early European systems were being financially trounced by the Japanese and American systems and so the standard became 1920 x 1080 pixels.

Standard HD is commonly described as a 2k format because it has a resolution of 1920 x 1080 pixels. This has a 16:9, or 1.77:1, aspect ratio, which is common to LCD, LED and plasma television design (though most broadcast TV works at around 1280 x 720 pixels). Cinema-style HD is termed Electronic Cinematography – it is also 2k but has 2048 x 1080 pixels, an aspect ratio of roughly 1.9:1. There are various definitions of the number of pixels in an electronic cinematographic image – agreements still have to be made as to exactly what numbers are involved. There is also one other important difference between Standard HD and Electronic Cinematography: Standard HD data is to a degree processed in camera, taking it a step away from the original moment of capture, while Electronic Cinematography footage is processed mainly in the post-production house. Beyond these 2k formats, 4k is 4096 x 2160 pixels (again roughly 1.9:1), and 8k is 7680 x 4320 (16:9) - this is NHK's Super Hi-Vision.
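The formats just listed can be tabulated with a few lines of arithmetic; this sketch simply recomputes pixel counts and aspect ratios for the resolutions named in the text:

```python
# Recompute pixel counts and aspect ratios for the HD formats named in the text.
formats = {
    "Standard HD":               (1920, 1080),
    "Electronic Cinematography": (2048, 1080),
    "4k":                        (4096, 2160),
    "8k Super Hi-Vision":        (7680, 4320),
}

for name, (w, h) in formats.items():
    megapixels = w * h / 1e6
    aspect = w / h
    print(f"{name:28s} {w} x {h}  {megapixels:5.2f} Mpx  aspect {aspect:.2f}:1")
```

The arithmetic makes the point of the paragraph concrete: 2048 x 1080 works out at about 1.90:1 rather than a clean 2:1, and 8k carries sixteen times the pixel count of Standard HD.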


NHK, or the Japan Broadcasting Corporation, recently conducted an experiment by linking a prototype 8k camera to 18 one-hour data recorders. The subject of the test was a car ride lasting 3 minutes. In order to capture it, the SR data recorders ran so fast that they went through one hour's worth of recording during the three-minute shoot - all 18 of them. The resolution of the projected image was immense: imagine a normal computer display set at, say, 1280 x 1024 pixels expanded to some 27 feet long. The technological moment had echoes of the Lumière brothers’ screening in January 1896 of a train arriving in a station. At the NHK screening, the Japanese audience were reported to have found the experience so overpowering that many of them experienced nausea. Film has been displayed on cinema screens of this size for many years; imagine, then, that density of pixels spread across a 27-foot screen – the possibilities of deep engagement and belief in the experience seem to have led to a physiological reaction.
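The scale of the experiment can be estimated from the figures given: each deck consumed an hour's capacity in three minutes, i.e. it ran at twenty times its nominal speed, across 18 decks. The nominal per-deck rate assumed below (440 megabits per second, a common figure for SR-class decks) is my assumption, not a number from the text:

```python
# Rough estimate of the aggregate data rate of NHK's 18-recorder 8k experiment.
# Assumption (not from the text): each SR deck nominally records ~440 Mbit/s.
nominal_mbit_per_s = 440

decks = 18
capacity_minutes = 60   # each deck holds one hour of material at nominal speed
shoot_minutes = 3       # the whole hour's capacity was consumed in three minutes

speed_up = capacity_minutes / shoot_minutes        # how much faster than nominal
aggregate_mbit_per_s = decks * speed_up * nominal_mbit_per_s

print(f"Each deck ran at {speed_up:.0f}x nominal speed")
print(f"Aggregate: {aggregate_mbit_per_s / 8 / 1000:.1f} gigabytes per second")
```

Under that assumption the rig was swallowing on the order of twenty gigabytes every second, which makes the audience's physical reaction to the projected result easier to credit.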



Any serious understanding of High Definition technologies requires a basic understanding of 'compression.' Light is focused through a lens onto a charge-coupled device or sensor, which then emits electrical impulses and thence data. Very early on in video production, a question arose for designers when this process originated far more data than was recordable. It was from this problem that the idea of throwing 'unnecessary' data away took hold. This method continues today: a contemporary HD camera, like the Sony HD750 or HD900, doesn't record 500 of its 1920 horizontal pixels - it just throws them away. The problem, as some manufacturers see it, is that too much data has been generated, so compression appears necessary and, in the worst case, information must be thrown away. But other manufacturers, led by Red, Arri, Panavision and now Sony too, realize – mostly through pressure from the cinematographic community - that we must keep all of the data in order to properly describe what is in front of the lens.

An accumulation of data is a representation of the original, and all representations have levels of veracity. Most of today’s HD cameras use a compression technology built on the Discrete Cosine Transform (DCT), a descendant of Jean Baptiste Joseph Fourier's harmonic analysis, which breaks the image data into tiles so that each can be treated independently. Recently, though, we have seen the arrival of wavelet transforms, which extend the same mathematical tradition. Fourier's underlying theory was in place by 1807, but its full implications for this kind of work were only grasped about 15 years ago. Wavelets have helped prise open a Pandora’s box:

"Wavelets are mathematical functions that cut up data into different frequency components, and then study each component with a resolution matched to its scale. They have advantages over traditional Fourier methods in analyzing physical situations where the signal contains discontinuities and sharp spikes. Wavelets were developed independently in the fields of mathematics, quantum physics, electrical engineering, and seismic geology. Interchanges between these fields during the last ten years have led to many new wavelet applications such as image compression, turbulence, human vision, radar, and earthquake prediction.” - Amara Graps, Astrophysicist


Discrete Cosine Transforms are a sort of ‘one-size-fits-all’ response to data – a thuggish response requiring intensive computation. This is in contrast to Wavelet Transforms, which interrogate the data coming through them and find the best response from within their algorithm. In effect they intelligently address the data to get the best out of it, while using less computational power. As one Director of Photography put it on the Cinematographers Mailing List:  "Ummm, wavelets good, DCT bad."
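The contrast can be seen in miniature with a toy signal containing a sharp step: a single-level Haar transform (the simplest wavelet) concentrates the step into one detail coefficient, while the DCT smears it across nearly every coefficient. This is only an illustrative sketch, not how any camera codec is actually implemented:

```python
import math

def dct_ii(x):
    """Naive DCT-II: every output coefficient mixes every input sample."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n))
            for k in range(n)]

def haar_level1(x):
    """Single-level Haar wavelet transform: pairwise averages plus details."""
    averages = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    details  = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return averages + details

signal = [0, 0, 0, 1, 1, 1, 1, 1]   # a sharp step - a 'discontinuity'

def nonzero(coeffs, eps=1e-9):
    return sum(1 for c in coeffs if abs(c) > eps)

print("DCT nonzero coefficients: ", nonzero(dct_ii(signal)))
print("Haar nonzero coefficients:", nonzero(haar_level1(signal)))
```

The Haar transform leaves most of its detail coefficients at exactly zero because the step only disturbs one pair of samples, which is the sense in which a wavelet 'interrogates' the data locally rather than treating it all alike.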

Contemporary cameras and post-production systems have been designed with DCTs in mind, and the manufacture of the relevant devices - cameras, proprietary editing and storage systems - has been designed and marketed to recoup the large amounts of costly research expended by big corporations. It is simply not in the interests of the bigger corporations to switch over to the new, more elegant technology - yet. The pressure exerted by the maverick Red Corporation has already had telling results on corporations like Sony, who are now marketing their flagship camera: the F35.


A pixel is effectively a packet of data that is represented on screen by a changing luminosity and color identity. The more pixels, the better. Currently, the highest form of HD image capture requires a hard disc - and not just any hard disc, but a Redundant Array of Independent Discs - a RAID. The only exception is Sony's SR deck, which records data on tape. So what's a RAID? If I throw you a ball, you might be able to catch it. If I manage to throw you 20 balls at the same time, you have no chance. If I throw 20 balls at you and another 19 friends - you have a chance of catching them. A RAID uses a group of discs in just this way to catch large amounts of data. If you want to record 1920 x 1080 pixels uncompressed and with their full complement of data, then you need read and write speeds of over 440 Megabytes per second. So where does compression end and aesthetics begin?
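That throughput figure can be sanity-checked with simple arithmetic; the exact number depends on bit depth, chroma sampling and frame rate, so the parameter choices below (4:4:4 sampling at a couple of bit depths and frame rates) are illustrative assumptions rather than figures from the text:

```python
# Rough uncompressed data rates for 1920 x 1080 video.
# Assumption (illustrative): full RGB/4:4:4 sampling, i.e. 3 samples per pixel.
width, height = 1920, 1080
samples_per_pixel = 3

def rate_mb_per_s(bits_per_sample, fps):
    bits_per_frame = width * height * samples_per_pixel * bits_per_sample
    return bits_per_frame * fps / 8 / 1e6   # megabytes per second

for bits in (10, 16):
    for fps in (24, 30, 60):
        print(f"{bits}-bit 4:4:4 @ {fps} fps: {rate_mb_per_s(bits, fps):7.1f} MB/s")
```

Depending on the assumed bit depth and frame rate, the requirement runs from under 200 to over 700 megabytes per second - sustained rates no single drive of the period could hold, which is exactly why the data has to be striped across an array.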

Currently, film DP's are still brought in to light HD productions, but they are as yet unfamiliar with the technology, and often electronically trained people are brought in to hold their hands; the common ground between the two is that preserving data is all. At a recent meeting of the British Society of Cinematographers, there was much wailing and gnashing of teeth as film-oriented DP’s stressed their concern over the lack of dependability of the production chains that eventuate in an image. It was argued that it is currently possible to send your data through the same equipment at two different facilities in the same city and obtain different colorations of that image. It has taken film 100 years to obtain a dependability in its chain of production, and of course the ability of a cinematographer to get that little bit extra - that indefinable advantage in their image making - is what adds value to their reputation. At the moment, however, the terrain of HD production is almost feared, because that control has yet to be realized.

Within contemporary cinematographic aesthetics, whether in film, analog or digital video, or electronic cinematography, there are a series of tactics to 'say something' with light. These tactics, if listed, become mundane: a warm look for safety and comfort, blue for threat and alienation - and so on. There are DP's, like Vittorio Storaro, who shot Apocalypse Now, who step outside of these prosaic color values. Whereas Storaro worked with color and light, the physiology of light enmeshed with its psychology, Conrad Hall (American Beauty and The Day of the Locust) worked in another area. His inventiveness and commitment were to the photographic within the cinematic arts. As Hall traversed the boundaries of contemporary wisdom about what constitutes good exposure, he influenced a whole generation of DPs. Hall knew that the still image captures something that cinematography rarely does; he was therefore concerned with finding the photographic moment amidst the flow of images. He tried to find the extraordinary within the ordinary. In this quest, Hall pushed film exposure to the limit; this kind of treatment would be ruinous to HD, because HD does not tolerate blown-out highlights as film does. Currently, High Definition cannot match film in terms of its exposure latitude.

On a Ridley Scott set in 1983, as he shot the famous 1984 Apple Mac commercial, I was shooting the “making of” material for Apple. At that time it was not possible to see, at the moment of shooting, the kind of images you were obtaining on film. For that reason the cinematographer, through experience, would be one of the only people on the set who knew roughly what they would be getting. As we were viewing back our rushes on our production monitor, checking focus and exposure, I became aware that about 20 people were standing behind us, quietly looking over our shoulders. Usually the film rushes would come back the next day to be viewed by the more select in the hierarchy. The two groups stared at each other - two alien tribes at war, film and video. But this was a film crew that had never before seen what it had been shooting at the same time as shooting it. One of them grinned in pleasure at seeing our footage and suddenly, like the German and British troops downing their rifles in World War One and playing football together on Christmas Day, we were friends. From then on they stopped being hostile to us, even sometimes offering to move lights to let us have some illumination - bearing in mind that lights are sacrosanct in film.

So, historically, in the clash between film and video, the film users were seen as artists and craftsmen and video users were seen as being artless - video was obtainable and without atmosphere, film was arcane, it was a quest in itself, it had kudos.




In film production, because film stock and lenses have become so sharp, cinematographers have had to constantly distort the color standards and definitions of film stock in order to impose atmosphere. 'Atmosphere,’ like popcorn, shares a quality that allows the easy suspension of disbelief. If film manufacturers say that development should occur at such and such a temperature, then heating up or cooling down the developer is a means by which the color, grain or exposure may be changed.

Here is the rub for HD: to get a look from a clinically clean medium you have to distress the image and therefore lose data, and as we've established, DP's really don't want to distress an image that is already distressed by being compressed.  If you do work on the image in camera, as the traditional film DP does, then you limit how much data is recorded - you have to work in the color matrix. If you crush the blacks to get a look you automatically reduce the data that is output into the image display.  So current HD practice is to do very little in camera, so that every bit of data is carried back into post-production, where the work on the image - the grading - can begin. But I contend that when you really look at images produced like this, you'll see a thin patina over the image and the 'look' itself is not inherent within the image. I've spent 30 years shooting video, as well as film, and I know it's possible to generate the look within the image. It is my contention that simply to light well and to leave everything to post is an abrogation of the DP's responsibility as a creative artist.
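The claim that crushing the blacks reduces the data output can be demonstrated numerically by applying a 'crush' curve to 10-bit code values and counting how many distinct output levels survive. The curve below is invented for the demonstration; it is not any camera's actual matrix:

```python
# Illustration: crushing the blacks collapses many input levels into few,
# irreversibly discarding shadow data. The curve here is invented for the demo.
levels = 1024                      # 10-bit code values, 0..1023
crush_point = 256                  # everything below this is pushed toward black

def crush(v):
    if v < crush_point:
        return (v * v) // (4 * crush_point)   # steep toe: squashes the shadows
    return v                                  # leave the rest of the range alone

distinct_in = len(set(range(levels)))
distinct_out = len(set(crush(v) for v in range(levels)))

print(f"distinct input levels:  {distinct_in}")
print(f"distinct output levels: {distinct_out}")
```

Under this toy curve the 256 shadow levels collapse into 64, and no grade in post can recover the difference - which is the DP's dilemma the paragraph describes.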

Original electronic imaging was analog in form – as was film – yet its formulation of image capture was different from film’s. Film has wide latitude – one can make an intelligent ‘mistake,’ rework the material and formulate a sense of ‘atmosphere’ within the image. This is commonly known as ‘the Look.’ Analog video is clean and clinical, and you have to get the exposure right – in the early days, if you didn’t get exposure correct, then you didn’t get focus. Color itself was grafted onto an already set formulation of image capture. I shot one of the first features generated on video and transferred to film for theatrical distribution: Birmingham Film and Video Workshop’s production Out of Order. I approached the task by imagining video as being like a reversal stock, with very little latitude for mistakes in exposure. The transfer to film was adequate, but compared to today’s digital transfer techniques it was not good in terms of color.

                With the advent of Electronic Cinematography (as distinct from High Definition video, which is an extension of digital video) something very important has happened with image capture. In both photochemical and electronic cinematography, until the image is developed, the image resides in latent form in both the silver halides and the un-rendered data. Development – the bringing forth of an image in film – is similar to the rendering of an image in the digital and electronic domain, and importantly, color is within the bit-depth of electronic data and is therefore an integral part of its material form. This developing practical understanding in the professional realm is counter to arguments that circulate within media theory. For instance, New Media: A Critical Introduction claims there is an essential virtuality to new media where the precise immateriality of digital media is stressed over and over again. However, industrial and professional expertise now challenges academic convention by seeking to re-inscribe digital image making as a material process.




In my own practice I have often been inspired by the simple act of making work with such wonderful technology. This technology functions faster than the eye or mind. Even analog video takes only 64 millionths of a second - 64 microseconds - to 'write' a line.
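That line time falls straight out of the PAL system's parameters: 625 lines scanned 25 times per second. A quick check:

```python
# PAL scans 625 lines per frame at 25 frames per second,
# so the time budget for one line is 1 / (625 * 25) seconds.
lines_per_frame = 625
frames_per_second = 25

line_time_s = 1 / (lines_per_frame * frames_per_second)
print(f"one line every {line_time_s * 1e6:.0f} microseconds")  # 64 microseconds
```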


Duration is to consciousness as light is to the eye. – Bill Viola


Viola is proposing that the presence of light is what caused the eye to evolve, and in turn, that consciousness evolved to deal with things that were more than momentary. He proposes that in a medium where time is an essential factor, waiting reveals so much more.

Viola's roots lie in both the symbolism of Renaissance painting and the Buddhist proposition of Dependent Origination - that everything can only arise in relation to everything else. My own roots grow out of the moment I realized that all things record an image through duration: from a lowly rock which, if left shadowed long enough, records an image; to paper that has a leaf left on it in bright sunlight; to celluloid that holds a coating; to tubes, chips and sensors that react to light.




My first encounter with video tape was in 1976 with 2-inch analog quadruplex, where one took a razor blade and cut it, just like film, then spliced it together to make an edit. Then re-recording came along, and we set up machines to record the next bit of video in line - thus creating an edit, at the cost of image deterioration with each generation.

Around 1982 I was managing a video facility in Soho called Videomakers. The owner of the studio, the son of an electronics inventor, watched while we tried to accomplish a simple dissolve from one image to another for a piece of art I was making. Unable to contain his excitement, he told us that his father had successfully harnessed a computer to 'revolve' a still image. “With a little bit of development the image could be refreshed 12 times per second – so, by doubling and then interlacing, by splitting the image into odd and even lines, a whole second of video, and henceforth a moving TV image, could be revolved.” In this case, through a sole inventor and not a corporation, we groped our way through the late analog age and into the early neo-digital. Our main concern was how to adjust our thinking processes to cope with the new paradigm: the fact that with a digital event one had something that could be infinitely manipulated, and therefore one could systematize the process - thus giving rise to 'the operations,' as Lev Manovich has termed them.

Though every video artist has enjoyed the accidents that have come about through stressing the parameters of low definition equipment, HD offers a different kind of unveiling of form: image capture can be achieved without necessarily stressing the media. This then prompts questions about the aesthetics of HD. Given that a primary ingredient of the artist’s palette is to find surprises within the medium itself, what new strategies can the artist or practitioner use to unveil a deeper insight into content? Though McLuhan tells us this should not be so, could the message HD delivers be the beginnings of transparency?

To return to Viola: "Duration is to consciousness as light is to the eye." But High Definition can deliver not just duration but articulation. So we might now consider how increased resolution could affect how and what we see, and restate his observation like this: "Definition is to consciousness as luminosity is to the eye."




In 1987, John Wyver carried Walter Benjamin's 1936 ideas forward – with the help of Jean Baudrillard and Paul Virilio – in his program L'objet d'art a l'age electronique broadcast on the satellite station La Sept. He asked: "Can a reproduction carry any of the authenticity of the original?" At that time the world was concerned with analog representations, which decay in their passage from copy to copy, from medium to medium. If one proceeded with digital compression using Fourier's earlier mathematics, then Benjamin's question might unveil a buried insight: To copy is to decrease. With digital copying this might still ring true - not only because things are changed and lessened in the act of copying - but because there is a sense in which the representation itself is simply a borg, a copy without feeling, without the 'true' sense of the original.

Over twenty years later, the spirit of the question still stands. Where are meaning, significance and value in the digital domain, given that the medium of reproduction and the medium of origination reside together in the same realm? Has the idea that things can be 'derivative' become defunct?  Is not everything both derivative and original at the same time? Is the idea of an 'original' anachronistic?




As there is a blurring of the lines between form and content, so there is between software, hardware and that nether region of firmware, which tells hardware to be something - rather than do something. Now, through a combination of the use of the net and digital media, a new kind of aesthetic is becoming available. Hermann Hesse predicted post-modernism and its bastard digital child ‘convergence’ in his 1943 work The Glass Bead Game. In the game itself, one might take a bar of Mozart and place it next to a brushstroke by Matisse, a line of poetry by Omar Khayyám and a silk screen by Warhol, and so create a new work of art. Here derivation is all; in fact it has been canonized. Hesse proposes that authenticity is not only present in the copy but that copy and original are one and the same - that the artwork’s weight accumulates with the addition of other copies and their imbued authenticity, all combining into new, authentic works of art. In pursuit of this aesthetic conglomerate, the actions of the new technologies, and the way the technology is being innovated, have themselves become a developing aesthetic.




To return to where I began: on August 31st, 2007, when Jim Jannard and Red delivered their first complement of 25 Red cameras to a selected few, they set the world alight with their offer of cheap, high-level wavelet technology, made available faster than any previous technological advance of this order. The introduction of 4k was a moment of industrial re-organization. This new technology allows new people, who previously would not have had the opportunity, to enter the industry at a high level. This shift in the industrial hierarchy is part of a cyclical phenomenon that comes in waves roughly every 5 years. Overtly it looks like a change in technology; covertly it is a change in employment functions. In the end, 4k is as relevant as everything that follows it - 8k, 16k, 32k, 128k - up to the data rate of the dominant hemisphere of a moderately intelligent person, which Scott Billups puts at 1GB/sec.

Crucially though, this development of User Generated Technology came out of an individualist trend that has somehow remained alive through late capitalism. About five years ago, Jeff Kreines, a Director of Photography in California, was experimenting with a friend from Thomson Grass Valley on a prototype HD camera. They had become fed up with the slowing of technical innovation emerging from the big corporations, so they tried to create a camera that fulfilled not only their needs but their aspirations. They made an aluminum case that contained some electronics and a few chips, had a fitting on the front to take a 35mm lens and, on top, the stripped-down carcasses of 20 iPods to RAID-record the high data output. This camera had nearly the same specifications as the Red camera. Though Red may look like the trailblazers, they are in fact the inheritors of a User-Generated, YouTube-like attitude to the production of technology.

What this means is that, from the early sole inventors of Fourier's time, we have passed through a long period - from the late analog to the neo-digital age - in which corporations controlled the means of innovation. But on entry into the meso-digital age, access to high-level technical innovation is again available to the individual; and this individual engagement with technology (besides being the apogee of the celebration of the geek within) is a hallmark of the web2/digital era, a trend currently centering on the production of High Definition technology.

The commonality of information available through the web is also allowing a commonality of aspiration so that the User, and now the Doer, is also the Maker and the Knower of their own world. As we make the transition between old and new states, a fluttering is occurring, a switching between the two states in the suspension of our disbelief. Through these changes, the definition of the self is expanding - the concept of the individual is being re-defined as it is being up-rezzed to a higher level of definition.