The Digital Camera in Motion Picture Film Production
by
Gary Slayton and Greg McMurry
©1998

To proceed effectively with the topic of Electronic Cinematography, we must determine the current availability of components that could be used in the system. In addition, there are many camera issues that must be defined and researched. This first research phase is dedicated to the details of the actual camera imaging device and image format.

We have identified the following issues as significant to any discussion of a digital motion picture camera.

  Terminology Issues-

  1. Resolution- The desired resolution of the system is 3000 pixels wide x 2000 pixels high. The current film chain often produces projected resolutions as low as 900 lines, while 35mm color negative resolves around 2000 to 3000 lines. In order to capture images for use in production we must have a sensor of at least 3000 pixels horizontally, which gives us about 2400 lines of useable resolution (see the sketch at the end of this item).

    Development of high resolution CCDs is limited by the small number of users. The main market for such devices is telescope instrumentation, so the emphasis is on large-pixel, full frame sensors with low noise that are clocked out at very slow rates.

    All the CCDs in the table above are full frame devices. Frame transfer and interline transfer devices are at this time limited to HDTV or lower resolution. We can see that Kodak, Fairchild and Philips have offerings with sufficient resolution for our needs.
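
    A quick way to reproduce the 2400-line figure: treat useable resolution as roughly 80% of the horizontal pixel count (the 0.8 factor is a rule of thumb implied by the numbers above, not a vendor specification). A minimal sketch:

      # Rough check of horizontal pixel count vs. useable resolution.
      # The 0.8 sampling-efficiency factor is an assumption for illustration.
      def useable_lines(pixels, efficiency=0.8):
          return int(pixels * efficiency)

      print(useable_lines(3000))   # ~2400 useable lines from 3000 pixels
      print(useable_lines(1920))   # ~1536 -- an HDTV-class sensor falls short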

  2. Color Depth- Color depth of the system will be 12 bits. The number of bits used to digitize the image should match the dynamic range of the signal produced by the CCD. In our case we are looking for a dynamic range of about 72 dB, so we need an A to D converter that produces 12 bits (see the check below). This is going to be a problem: 12 bit A to D converters are not fast, so we may have to consider some "balanced parallel" designs. For example, we may need several A to D converters in parallel, where one A to D samples the signal while the others are converting.
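
    As a check on the bit depth, each bit of an ideal A to D converter is worth about 6 dB of dynamic range, so 72 dB calls for 12 bits. A minimal sketch of that arithmetic:

      import math

      # Bits needed to cover a given dynamic range, using the rule of thumb
      # that each A-to-D bit is worth about 6.02 dB.
      def bits_for_dynamic_range(db):
          return math.ceil(db / 6.02)

      print(bits_for_dynamic_range(72))   # -> 12 bits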

  3. Aperture- The system should be usable with existing 35mm film lens systems. The best we can do is to find a CCD with an image area close to the coverage of an existing film lens series. As an example, suppose we select the Kodak KAF-6300 CCD. The horizontal size of the CCD is 27.6 mm (1.09"), which is a good match for the full silent aperture of 35 mm film at 25 mm (0.98") of image width.

    The other choice is to use relay optics to alter the format of the lens in use. This is not a bad way to make fine adjustments to the size of the image plane. However, if the change is large, as from 35 mm to the 2/3" video format, the cinematographer must now compensate for stops and depth of field.

    In any case we should keep the pixel size above 5 microns, because lens quality becomes an issue for small-geometry pixels. At large pixel sizes, above about 15 microns, we have the opposite problem: the image size falls between the 35 mm and 2 x 2 formats, requiring substantial relay optics. The sketch below shows how pixel pitch sets the sensor width.
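
    The format arithmetic is just pixel count times pixel pitch. A small sketch, assuming a 3072-pixel-wide sensor (the KAF-6300 class device discussed above) and comparing a few pitches:

      # Sensor width from pixel count and pixel pitch, compared with the
      # 35 mm full (silent) aperture of about 25 mm quoted above.
      def sensor_width_mm(h_pixels, pitch_um):
          return h_pixels * pitch_um / 1000.0

      print(sensor_width_mm(3072, 9.0))   # ~27.6 mm -- close to the 25 mm silent aperture
      print(sensor_width_mm(3072, 5.0))   # ~15.4 mm -- shrinking toward small video formats
      print(sensor_width_mm(3072, 15.0))  # ~46.1 mm -- larger than 35 mm, relay optics needed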

  4. Transfer Rate- We need to find or build a new CCD that is both "big" and "fast". This is the single biggest issue we have with the current CCDs on the market. Regardless of the manufacturer, we are unable to buy a CCD that will produce a data stream fast enough for even 24 fps at 2k x 2k. So, first let us look at the way data is transferred out of current CCDs.

    In clocking data out of a CCD, the charge is shifted down through the pixel sites and transferred into the horizontal register one line at a time. Each line is then clocked out of the chip at the CCD's maximum clock rate. This clock rate is the limiting factor, typically 1 to 20 MHz. If we are clocking out 3000 x 2000 pixels at 20 MHz, each line takes 3000 x 1/(20 E6) seconds = 150 us/line. Then 2000 lines x 150 us = 0.3 seconds per picture, or 3.3 fps, without even including vertical transfer times of 5 to 15 us/line and exposure (the sketch below repeats this arithmetic). Some CCD experts can manage to overclock the horizontal output register by 2 to 1, but that is not enough speed for our needs, and the signal to noise becomes worse during overclocking.
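
    The readout arithmetic above, written out as a short sketch (vertical transfer time and exposure are ignored here, just as they are in the text):

      # Frame readout time for a single-output-register CCD, per the text above.
      H_PIXELS = 3000
      V_LINES = 2000
      PIXEL_CLOCK_HZ = 20e6                # typical maximum horizontal clock

      line_time = H_PIXELS / PIXEL_CLOCK_HZ     # 150 us per line
      frame_time = V_LINES * line_time          # 0.3 s per picture
      print(line_time * 1e6, "us/line")                     # 150.0
      print(frame_time, "s/frame", 1 / frame_time, "fps")   # 0.3 s, ~3.3 fps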
     
     

    fps     # of pixels    # of horizontal      data rate for 12-bit data
                           output registers     (bits/second)
     1      6,000,000      1                        72,000,000
    10      6,000,000      1                       720,000,000
    24      6,000,000      1                     1,728,000,000
    48      6,000,000      1                     3,456,000,000

    The modification required is to increase the number of horizontal output registers to the point that we can get the image out fast enough for 24 to 48 fps. At 48 fps we need a pixel rate of 48 x 3000 x 2000 = 288 million pixels per second, i.e. an effective 288 MHz horizontal clock. This means something like 30 horizontal registers running at 9.6 MHz each, plus the vertical transfer time and exposure (see the sketch below).

    Needless to say, we must still overcome the issues of matched output amplifiers (or multiplexing into a single amplifier), plus the age-old problem of how to digitize and store the data.
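
    A sketch that reproduces the table values and the register estimate; the 30-register split is the figure suggested above, not a design decision:

      # Data rate and horizontal-register estimate for a 3000 x 2000, 12-bit sensor.
      PIXELS = 3000 * 2000
      BITS = 12

      def data_rate_bits(fps):
          return fps * PIXELS * BITS

      for fps in (1, 10, 24, 48):
          print(fps, data_rate_bits(fps))   # 72e6 ... 3.456e9 bits/second

      pixel_rate_48 = 48 * PIXELS           # 288e6 pixels/second at 48 fps
      registers = 30                        # the split suggested in the text
      print(pixel_rate_48 / registers / 1e6, "MHz per register")   # 9.6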
     
     

  5. Sensitivity- We must match popular film stocks. Different manufacturers of CCDs use different schemes for specifying sensitivity, and none of the data given is in the form of ISO / ASA. We can derive the formulas to convert from the data given by a manufacturer to ASA; it is best to wait until we select a vendor to do this. As a guideline for our thinking, it is fair to say that in its linear portion, a large-pixel, full frame CCD can be expected to produce sensitivity, at a reasonable signal to noise, comparable to existing film stock of about ASA 100.
  6. ISO / ASA- Calculating ISO / ASA speed for digital cameras: ISO standard 12232 (in final draft) defines a speed for digital cameras similar to film speed. Unlike film speed, it is not a single number but a base speed and a speed latitude given as upper and lower limits. Film may be shot at different speeds, but pushing too far degrades colorimetry and increases noise. Digital cameras maintain good color when their speed is pushed, only increasing their noise level.

    For this reason, the base speed for digital cameras is defined as the speed which results in a given level of noise in the image (the lowest light resulting in an 'excellent' image). Speed latitude consists of two more measurements, the lower being the saturation limit of the sensor and the higher being the highest speed which still results in an 'acceptable' image. The noise in an 'acceptable' image is taken as 4 times that in the 'excellent' image.

     Saturation Speed

    Saturation is defined with a 1/2 stop of headroom (41%) for highlights, and references an 18% reflectance test card for actual measurements. ISO saturation speed = 78 / (lux x time), where lux is measured at the sensor and (lux x time) is the highest exposure which will not saturate the sensor or image processing.

    The Kodak KAF-6300 CCD has a saturation signal of 85,000 electrons and a quantum efficiency of 30% at 550 nm (saturation is expected to occur in green before red or blue). 0.28M photons are required for saturation. This is an energy of 1.01 E-13 (watt - sec).

    (photon energy = (Planck's constant x speed of light) / wavelength)

    At 550 nm, there are 680 lumens per watt, or 680 lux per watt/meter^2.

    Photosite area is 9 microns square, or 8.1 E-11 meter^2.

    Lux required to saturate with a one second exposure is 0.85.

    Substituting this into the ISO speed above gives a speed of 92. The specification rounds calculated values between 80 and 100 to ISO 100.
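
    A sketch reproducing the saturation-speed calculation step by step, using the constants quoted above (this is still the approximate method noted in the disclaimers that follow):

      # Approximate ISO saturation speed for the KAF-6300, per the text above.
      PLANCK = 6.626e-34          # J*s
      C = 3.0e8                   # m/s
      WAVELENGTH = 550e-9         # m
      SAT_ELECTRONS = 85000
      QE = 0.30
      LUMENS_PER_WATT = 680       # at 550 nm
      PIXEL_AREA = (9e-6) ** 2    # 8.1e-11 m^2

      photons = SAT_ELECTRONS / QE                          # ~0.28 million photons
      energy = photons * PLANCK * C / WAVELENGTH            # ~1.0e-13 watt-seconds
      lux_seconds = energy / PIXEL_AREA * LUMENS_PER_WATT   # ~0.85 lux at a 1 s exposure
      iso_sat = 78 / lux_seconds                            # ~91-92, rounded to ISO 100
      print(photons, energy, lux_seconds, iso_sat)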
     
     

    Disclaimers

    This calculation is only approximate, as the illumination source specified in the standard was not used. This information, along with the calculations required for noise-based speed, is in other ISO standards which are not available to us at this time.

    Other standards referenced by the speed standard are:

    ISO 554 : 1974

    ISO 2721 : 1982

    ISO 7589 : 1984

    ISO 14524

    ITU-R BT.709 : 1993

    Because other speed measurements are noise based, they can only be estimated without actually constructing the camera.

    Both noise based ISO speeds will be higher than the saturation speed calculated here.
     
     

  7. Shutter Angle- For the purposes of simulating the motion blur that we associate with motion picture production, we will have to simulate or duplicate a conventional camera shutter in our SS Production Camera. If we use full frame CCDs, we will need a conventional shutter: when the charge is shifted vertically through the pixel sites it will produce smear unless a shutter prevents light from striking the pixels. The good thing about this is that, to film people, the motion blur will be identical to that of existing film cameras. The sketch below relates shutter angle and frame rate to exposure time.
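
    For reference, the exposure time delivered by a rotating shutter is the shutter angle over 360 degrees divided by the frame rate; a minimal sketch:

      # Exposure time produced by a conventional rotating shutter.
      def exposure_seconds(shutter_angle_deg, fps):
          return (shutter_angle_deg / 360.0) / fps

      print(exposure_seconds(180, 24))   # 1/48 s, the usual film look
      print(exposure_seconds(90, 24))    # 1/96 s, crisper motion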

  8. Signal to Noise- The camera's signal to noise should match film. The term used for CCDs in place of signal to noise/contrast is dynamic range: the ratio of the full well output signal to the dark noise of the CCD, most often expressed in dB. The CCDs we are looking at have typical ratings of 72 dB. How does this relate to film? I have heard that the film chain has 7 stops of latitude and the video chain has 5 stops. One would assume, then, that a film system has 4 times the dynamic range of video.

    The problem here is that the limiting factor in video is the monitor. A CCD is linear from its noise floor to ~97% of full well, which means it has a very different transfer curve than film. If we consider a stop to be an increase or reduction in exposure of 2 to 1, then each 6 dB of dynamic range is equal to 1 stop. That means a CCD with 72 dB of dynamic range is going to give us 12 stops. Looks great! The catch is that the first stop is unusable because the noise is 50% of the signal, the second stop is unusable because the noise is 25% of the signal, and so on. So the real number of useable stops depends on how much noise we can accept, as the sketch below shows.
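
    A sketch of the stop counting above; the minimum acceptable signal to noise is the free parameter, and the thresholds shown are illustrative assumptions:

      DYNAMIC_RANGE_DB = 72
      total_stops = int(DYNAMIC_RANGE_DB / 6)   # 12 stops, since each stop is about 6 dB

      # Count the stops whose signal-to-noise meets a chosen threshold.  In the
      # bottom stop S/N is ~2:1 (noise is 50% of the signal), in the next ~4:1,
      # and so on -- S/N roughly doubles with each stop up from the noise floor.
      def useable_stops(min_snr):
          return sum(1 for n in range(1, total_stops + 1) if 2 ** n >= min_snr)

      print(total_stops)         # 12
      print(useable_stops(8))    # 10 stops if 8:1 S/N is acceptable
      print(useable_stops(32))   # 8 stops if we insist on 32:1 (about 30 dB)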


  9. Image Format- We should capitalize on the change in pixel count between formats by not transferring the unused pixels inherent in each format. We should consider placing the additional horizontal output registers above and below the image area. This will let us reduce the vertical size of the CCD image area in a symmetric fashion. We can place charge drains at the edge of each final horizontal output register to allow us to dump unwanted lines quickly. When we do not require the information contained in a line, we shift the line through the output registers into the charge drain; the charge is drained into the substrate of the chip, and the wanted line follows the unwanted line into the register for clocking out. If we are clever in the placement of the registers and drains, we can produce the required formats and a viewfinder mode that could be displayed on a standard monitor. A rough sketch of the timing payoff follows this item.
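
    A rough sketch of the timing payoff from dumping unwanted lines into the charge drain rather than clocking them out. The line counts correspond to 1.85:1 and 2.39:1 framings on a 3000 x 2000 sensor, and the 10 us dump time per line is purely an assumption for illustration:

      # Readout time with and without a fast line dump, for a 3000 x 2000 sensor.
      H_PIXELS, V_LINES = 3000, 2000
      LINE_READ_US = 150      # full line through the horizontal register (see above)
      LINE_DUMP_US = 10       # assumed fast dump into the charge drain

      def frame_time_s(wanted_lines):
          dumped = V_LINES - wanted_lines
          return (wanted_lines * LINE_READ_US + dumped * LINE_DUMP_US) / 1e6

      print(frame_time_s(2000))   # 0.30 s -- full frame
      print(frame_time_s(1622))   # ~0.25 s -- roughly 1.85:1 framing
      print(frame_time_s(1255))   # ~0.20 s -- roughly 2.39:1 framing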
  Current Availability-


    1. The CCD Element- The CCD we need does not exist, so a survey of the different manufacturers must be made to determine the best path to take. From the data I have looked at so far, the closest part to what we need is the Kodak KAF-6300. I think we should talk to the following manufacturers:

      Dalsa, EG&G Reticon, Fairchild, Kodak, Philips, Thomson

      Each of these companies has the ability to make what we require. Several of them have existing products that could suit our needs if modified, and all of them have the ability to build custom CCDs from scratch. Philips, Thomson, EG&G, Panasonic, Sharp, T.I. and Kodak have been contacted concerning the design of custom chips. The process of making a custom CCD has become more reasonable over the last few years.

      We should expect that NRE would be less than $500k and that the CCD would be ready in less than 6 months.

    2. Image Compression- Image compression is only one part of our problem. We are going to need an array of hardware data-flow processors. We have raw data coming out of the CCD at 150 to 300 megawords/second. The data is 12 bits, and after we recover the color we have 3 planes of 3000 x 2000. So we have 27 megabytes per picture (12-bit samples, 8 bits per byte), which at 24 fps is 648 megabytes/second. This is without things like gamma correction or image enhancement. To do all this in a reasonable time requires lots of dedicated hardware.

      Now we can consider compression. We should expect that if we can implement data-flow hardware, we will be able to use similar hardware for compression. These data rates are faster than the dedicated MPEG and motion JPEG chips that are currently available. Current experience tells us that in a still frame we should not exceed compression of 10 to 1 or we will begin to see noticeable artifacts. If we use MPEG we can gain another 3 or 4 times, giving us a ratio of 30 or 40 to 1. The editors will prefer motion JPEG, as it allows cutting at any frame; we will be lucky to get 20 to 1 with motion JPEG. The sketch below ties these ratios to data rates.
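
      A sketch tying the raw data rate to the compressed rates; the compression ratios are the planning figures from the text, not measured results:

        # Raw and compressed data rates for 3000 x 2000, 3 color planes, 24 fps.
        BYTES_PER_FRAME = 3 * 3000 * 2000 * 12 / 8   # 27 MB per picture at 12 bits
        raw_rate = BYTES_PER_FRAME * 24              # 648 MB/s uncompressed

        for label, ratio in (("still-frame JPEG", 10), ("motion JPEG", 20), ("MPEG", 40)):
            print(label, raw_rate / ratio / 1e6, "MB/s")
        # -> 64.8, 32.4 and 16.2 MB/s respectively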

      Further investigation should be done with available hardware compression manufacturers such as:

      C-Cube, DiviCom, GI, Sarnoff, Zoran, Analog Devices, etc.

    3. Digital Recording- More research is necessary to determine what might be considered "state of the art" or what may be currently available. We will need some form of a parallel system of recorders. For example, from the previous discussion we had an MPEG data rate of 16.2 MB/s, which at 8 bits/byte is a serial data rate of about 130 Mb/s. Currently the fastest data recorders are the Ampex data streamers at around 22 Mb/s, which means we would need 6 data streamers in parallel to record the data (see the sketch below). The newer concepts would include the use of RAID-style disk arrays for real time recording, with backup to Ampex DTS (Digital Tape Storage) in non-real time. This is currently both economical and practical.
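
      The recorder count follows directly from the rates quoted above; a minimal sketch:

        import math

        # How many parallel data streamers are needed for the compressed stream.
        mpeg_rate_MBps = 16.2                   # MB/s from the compression discussion
        serial_rate_Mbps = mpeg_rate_MBps * 8   # ~130 Mb/s
        streamer_Mbps = 22                      # per Ampex data streamer, as quoted
        print(serial_rate_Mbps)                                # 129.6
        print(math.ceil(serial_rate_Mbps / streamer_Mbps))     # 6 recorders in parallel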

Digital recording techniques will be covered more completely in another document.

This document is property of Greg's Sandbox and was authored by Gary Slayton and Greg McMurry. April, 1998©