Blue versus Green

Posted by on Apr 5, 2011 in Blog, Inspiration, Technology | 2 comments


I attended an event at the DGA on March 12th comparing the visual effects of the original Tron with those of the recent Tron: Legacy. There was a lot of great information about the state of the industry, workflows, etc. All good. But one comment by the VFX supervisor of Tron: Legacy (I think it was Eric Barba at Digital Domain) stuck in my head. He mentioned that he “convinced” the director Joseph Kosinski to shoot on blue screen instead of green because he felt it was more flattering to skin tones. Interesting.

When I mentioned this to a DP friend of mine, he said he understood that skin tone has far more blue in it than green, so keying out green loses less information. That didn’t sound quite right to me, so I did a little Photoshop test. It certainly seems to indicate that there is less blue than green in skin tones. And with a quick glance at the color wheel, it’s easy to see why: blue is in fact the complement of orange (which is to say, its opposite) and thus mostly factors in to add value and saturation. So next time we shoot blue screen.

Not so fast – that is really only one of the factors to consider when picking screen color. Putting aside clothing (blue jeans) and eye-color concerns (not really a big deal anymore – easily solved with a junk matte), one ought to consider the Bayer-filter issue. Without getting too far into CMOS chip technology, suffice it to say that the two dominant CMOS cameras, the Red and the Alexa, are more sensitive to green than blue light – so there will be less noise to deal with in the key. (Then again, with the new M-X chip and the Alexa, they are so low-noise that I’m not sure this should be a primary concern anymore… practical experience necessary here – I’ll report back.) So maybe if I were shooting with a Genesis/F23 or F35 (which use RGB and luminance stripes instead of a Bayer pattern and thus don’t favor green), I’d shoot blue.
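A quick numeric version of that Photoshop test – a minimal sketch using made-up but representative skin-tone RGB values (my own assumptions, not sampled from the actual test image):

```python
# Hypothetical sketch: compare green vs. blue content of a few
# representative skin-tone RGB values. The sample colors below are
# assumptions for illustration, not taken from the Photoshop test.
skin_tones = [
    (224, 172, 105),  # light
    (198, 134, 66),   # medium
    (141, 85, 36),    # dark
]

for r, g, b in skin_tones:
    print(f"R={r:3d} G={g:3d} B={b:3d}  green-minus-blue gap={g - b:3d}")

# In every sample, the green channel carries more of the skin tone than
# blue – consistent with blue sitting opposite orange on the color wheel.
```

In other words, keying out blue discards less of what actually makes up a face than keying out green does.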
Aha: Tron: Legacy was shot with the Pace Fusion system on Sony F35 cameras.

So what other issues are at play? Color suppression? This, I think, is where we see that shooting people on blue screen has some merit. When you suppress green out of an image, you use its complement – magenta. When a pixel is fully green and you add magenta, it becomes neutral. But in the dynamic texture of human skin, right next to that green pixel is a more neutral pixel, and that pixel turns magenta – not a very flattering color in human skin tone. So when suppressing the green spill off a face, you have to pick your poison: get out more green but introduce some magenta, or leave some green in. (A common approach to the magenta spill is to desaturate the image after you’ve calculated the key – a look we got used to in VFX blockbusters back in the 90s.) In the case of blue spill, the complement is orange – a much more natural color for human skin tone.

So why did we leave blue behind if it is more flattering to skin tone? Well, blue screen was used in traditional film matting techniques because panchromatic film is more sensitive to blue light, thus providing a cleaner pullout matte. But that was film. Video and digital cameras have traditionally been least sensitive to blue light – those of us who...
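That pick-your-poison tradeoff is why many compositors prefer a channel-limiting despill over straight magenta addition: instead of subtracting green everywhere, you clamp green so it never exceeds the other channels. A minimal sketch of that common trick (not the actual keyer math used on Tron: Legacy):

```python
def despill_green(r, g, b, mode="average"):
    """Naive channel-limiting green despill on a single pixel.

    Clamps the green channel so it cannot exceed a limit derived from
    red and blue. Unlike adding magenta, this leaves already-neutral
    pixels untouched, so skin doesn't drift magenta.
    """
    if mode == "average":
        limit = (r + b) / 2.0  # gentler limit
    else:  # "max"
        limit = max(r, b)      # more aggressive spill removal
    return r, min(g, limit), b

# A skin pixel with green spill: only the excess green is pulled down.
print(despill_green(200, 220, 170))   # green clamped to (200+170)/2 = 185
# A neutral gray pixel is untouched – no magenta cast introduced.
print(despill_green(100, 100, 100))
```

The same idea applies symmetrically to blue spill, with the limit derived from red and green instead.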

Read More

Contour Camera

Posted by on Feb 13, 2011 in Blog, Inspiration, My Work, Technology | 0 comments


We’ve been shooting a lot with the new Contour cameras – much like the GoPro, though with a more sports-friendly form factor – and this image gets to the heart of the matter. We mounted, or rather Joe Case, our Second Unit DP, actually mounted the camera onto the bat itself. Cool imagery, but the compression artifacts are from the footage – not JPEG compression in the posted image. My response to the shot is, “Great, now let’s reshoot it with a real camera.” Ok, but how? A DSLR might look better, but attaching it to the end of a baseball bat is probably not feasible – I’d be very worried that the anemic 1/4-20 thread on the bottom of the camera would just rip out when the bat swings (and yes, we shot that). You could use an SI-2K, but then you’d be tethered. And frankly, if you take a look at the SI-2K rig Guy Ritchie used for a recent Nike spot, it isn’t exactly a comparable solution to the Contour. More on that here: Guy Ritchie Nike Spot. The answer might be this new industrial camera from Allied Vision Technologies. It uses the same Kodak KAI-02150 CCD chip as the Ikonoskop A-Cam dII, and because it is a CCD, there’s no jello wobble – which is completely terrible on these little sports cameras. It’s on my to shoot...
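For anyone wondering what the jello wobble actually is: a CMOS rolling shutter reads out scanlines one after another, so anything moving during readout gets sheared, while a global-shutter CCD exposes every line at the same instant. A toy sketch of the effect (the numbers here are illustrative assumptions, not Contour specs):

```python
import numpy as np

# Toy rolling-shutter simulation: a vertical edge moving 1 px to the
# right per scanline of readout time. Each row "sees" the edge at a
# later position, shearing it into a diagonal – the jello skew.
height, width = 8, 16
velocity_px_per_row = 1

rolling = np.zeros((height, width), dtype=np.uint8)
for row in range(height):
    x = 4 + row * velocity_px_per_row  # edge position when this row reads out
    rolling[row, x] = 255

# A global-shutter CCD samples all rows at once: the edge stays vertical.
global_shutter = np.zeros((height, width), dtype=np.uint8)
global_shutter[:, 4] = 255
```

With whole-camera vibration (a swinging bat, say) that per-row shear changes direction every few lines, which is exactly the wobble you see in the footage.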

Read More

Camera color test

Posted by on Nov 21, 2010 in Blog, My Work, Photography, Technology | 0 comments


So from left to right: Canon 5D, Canon 7D, and Red One M-X. The question is: can you intercut the footage? Here’s a little piece of recent footage. You can tell the 5D and 7D are challenged to handle the dynamic range in the image, and if you zoom in, you can clearly see the H.264 compression artifacting. That said, the color is pretty close after a little tire-kicking in Colorista II. Our current workflow is to dial in superflat settings on the Canons (see Stu’s ProLost blog for details) and meter the color temperature – then dial it in on all three cameras. So far, so groovy. Auto WB to a white card, not so much – word to the wise. The resolution of the Canons is about a sixth of the Red’s – but for this we’re downres’ing to SD, so no worries. But if I were delivering HD, I wouldn’t be...

Read More

Side light study

Posted by on Nov 21, 2010 in Blog, My Work, Photography, Technology | 0 comments


The top one I lit in Reno, NV – a 6K book light through a light grid, with an ambient bounce card for fill. I wonder if they had a special to get into Sydney’s left eye… And I don’t care how you light Malcolm McDowell – he’s just cool.

Read More

To Grain or Not to Grain

Posted by on Oct 29, 2010 in Blog, Music, My Work, Technology | 0 comments


That is the question. I’m considering adding grain to the music video I’m working on for Rich Ferguson. My first step was the standard After Effects Add Grain filter. I chose 5279 as a starting place, because that was a well-known stock – fast and pretty. Well, I don’t know what the people at Adobe were smoking, but this preset looks nothing like film grain. Compare the clean image of a comp from the video with the next image, which is the preset. Look at the random colored grain. Ok, fine. But then take a look at a still frame of our soon-to-be-ex-Governor in Predator (ok, not shot on Vision 500, but you get the point). Firstly, there is very little color variation in the grain structure. I tried to find some custom settings to match this look, and it seems pretty close. Happy. However, anybody who has looked at color film grain very, very closely will know that there are rare, crazy outlier grains that are fully saturated in the r, g, or b layer. There is no way to dial that kind of random color noise into this filter. Bad on them.

Now there is another interesting problem with making this digital, noiseless stuff look like film: what I’ll call random pattern resolution. Wha-at? An image on film is captured by silver halide crystals made opaque in development, right? Where they land on the film surface is random – organic, if you will. In the case of a CMOS chip (like the Red or Alexa or 5D), it is a grid – a regularly patterned grid. So there is no variation in the placement of the pixels in the image. In the case of film, the placement of these ‘pixels’ changes every frame. This gives film one advantage and creates one side effect (that I can think of). The advantage is that the perceived resolution of film is much higher than its real resolution (as measured on a still frame), because the human mega-mind averages many frames of random grain placement together to form a composite image.
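Those saturated outlier grains are easy enough to fake by hand – a rough numpy sketch (every parameter below is my own guess for illustration, not anything from Add Grain’s internals):

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 64, 64

# Mostly-monochrome base grain: the same luminance noise repeated into
# all three channels, matching the low color variation of real stock.
luma = rng.normal(0.0, 0.03, size=(h, w))
grain = np.repeat(luma[:, :, None], 3, axis=2)

# Plus the rare, fully saturated outlier grains the post describes:
# a handful of pixels get a big boost in exactly one random channel.
n_outliers = 20
ys = rng.integers(0, h, n_outliers)
xs = rng.integers(0, w, n_outliers)
chans = rng.integers(0, 3, n_outliers)
grain[ys, xs, chans] += rng.uniform(0.5, 1.0, n_outliers)
```

Add `grain` to a normalized frame and the result is near-neutral noise with the occasional hot red, green, or blue speck – the look the preset can’t produce.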
The side effect is that there will be no perfectly straight lines on a film frame – because the silver halides are randomly placed on the film. Played at 24 frames per second, a line will be perceived as straight, but it isn’t. With that said, take a look at the custom-grain image of the video. The settings approximate natural film grain, but the lens flare created a perfectly straight line. It seems to me that Adobe ought to add a soupçon of distortion to the image. And, of course, if they were serious about the whole affair, they would add that distortion randomly to each layer of the film grain – r, g, and b – and then re-average them to create a properly dynamic image. Now, if you are following the logic, that’s going to mean up-sampling the image (because let’s face it, nature operates on greater grid detail than 1920×1080 or whatever – AKA infinite) to get some sassy sub-pixel interpolation, and then down-sampling it again. Look for the iPhone app soon – meantime, I suppose I could up-res the image, separate the RGB channels, add a different turbulent displacement to each layer, recombine, and down-res. But I won’t, because I would never finish the video. My fellow digital cinema geek and frequent Reduser contributor Dan Hudgins called foul and pointed out that even film grain is an anachronism – noise is the new grain. And noise IS single-pixel width, so the...
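That up-res / per-channel-displace / re-average / down-res pipeline can be roughed out in a few lines. This is only a sketch of the idea: crude nearest-neighbor up-sampling and whole-pixel `np.roll` shifts stand in for real sub-pixel interpolation and turbulent displacement:

```python
import numpy as np

def per_channel_wobble(img, rng, max_shift=2, scale=4):
    """Sketch of the pipeline from the post: up-sample, displace each of
    r/g/b by a different random amount, then average back down.

    img: float array of shape (h, w, 3). scale and max_shift are
    illustrative defaults, not values from any real plugin.
    """
    # Up-sample so the shifts become sub-pixel after down-sampling.
    up = np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

    # Give each channel its own random displacement (a stand-in for a
    # per-channel turbulent-displace pass).
    for c in range(3):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        up[:, :, c] = np.roll(np.roll(up[:, :, c], dy, axis=0), dx, axis=1)

    # Down-sample by block-averaging, which re-blends the channels'
    # slightly different placements into one softened frame.
    h, w, _ = img.shape
    return up.reshape(h, scale, w, scale, 3).mean(axis=(1, 3))

frame = np.ones((8, 8, 3))  # a flat test frame
out = per_channel_wobble(frame, np.random.default_rng(1))
```

Run per frame with a fresh displacement, and straight edges stop being perfectly straight – which is the whole point.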

Read More