Stereoscopic broadcasting throws up a number of technology challenges for production and transmission. These are neither unprecedented nor insurmountable, writes Amberfin’s Mark Horton.
Both colour and sound had false starts in the cinema; it took a series of breakthrough movies, like The Jazz Singer (sound) and The Wizard of Oz (colour), to gain public acceptance for the two technologies.
In 1928, the head of United Artists claimed that: “talking pictures will never replace the silent drama” and Russian film genius Sergei Eisenstein called sound “rotten trash”. Later, similar arguments were made against colour, especially on the grounds of cost. The experts at Fortune magazine said, as late as 1934: “whether colour can make black and white pictures as obsolete as sound made silent pictures, is quite another question”.
But ultimately, the movie business is just that: a business. Both sound and colour took some initial investment – and indeed a leap of faith – on the part of the studios, but both turned into big revenue generators for the people who backed them. Today the same conversation exists around stereoscopic films (typically shortened to s3D). As the first of the big s3D blockbusters start to make big money for their makers, the discussion is moving to the next step: s3D broadcasting.
However, s3D broadcasting is still in its infancy, led by a few early pilot programmes and channels. Recent consumer research shows that, if done well, s3D broadcasting could grow into a significant new choice for viewers at home.
Like its cousin in the cinema, it could make big money. But it’s important to remember that 3D cinema has a source of revenue now – the paying public spending money at cinemas. It’s not clear how soon (or how much) cash will come from the public for s3D broadcasting. So, unless you’re lucky enough to have a consumer electronics underwriter, for the near term, s3D cinema remains net cash positive and s3D broadcast remains net cash negative.
The issue for most broadcasters is how and when to plan for s3D. There’s an industry-wide learning curve and just like when the talkies or colour came to cinema, there are objections, debates and misunderstandings about s3D broadcasting. For example, many journalistic articles use images of anaglyph glasses, the legacy red/blue system that is not seriously discussed for modern s3D broadcasting.
Today’s s3D broadcasting plans almost all involve the use of linear or circular polarised glasses, or active shutter glasses. These systems produce far better subjective quality than anaglyph.
Stereo3D televisions now coming onto the market give a good stereoscopic image, and they work well with conventional 2D SD and HD content too.
Empirical evidence from early s3D broadcast trials clearly shows that, whichever system is used, there is a pressing requirement for image quality at the very top end of what home TV viewers currently see.
If we are collectively going to make money out of s3D broadcasting, we need to produce good pictures for the consumer to watch and we need to do that cost effectively.
s3D broadcasting works on horizontal disparity. Stereo3D content is created by shooting or rendering (or ‘dimensionalising’) two eye views with a horizontal parallax offset, giving a left-eye and a right-eye view.
Any other disparity between the images – colour differences, blocking artefacts, timing offsets, differences in sharpness, geometric differences and so on – causes discomfort (and eventually headaches and nausea). So at each point in the workflow chain, we need to get things right.
In shooting s3D broadcast, typically a pair of matched HD cameras, with matched lenses, mounted on an s3D rig, is used to capture the image. The horizontal offset produces a binocular disparity. That binocular disparity, together with other information in a scene, especially the relative size of objects, occlusion and relative motion, creates depth perception.
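The relationship between that horizontal offset and perceived depth can be sketched with a simple pinhole-camera model for a parallel rig. The function and the numbers below are illustrative assumptions, not figures from any broadcast standard or rig specification:

```python
def disparity_to_depth(focal_px, baseline_m, disparity_px):
    """Depth of a scene point (metres) from its horizontal disparity.

    Simple pinhole model for a parallel camera pair:
    depth = focal length (pixels) * camera baseline (metres) / disparity (pixels).
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: a 1000 px focal length and a 65 mm baseline
# (roughly the human interocular distance) put a point with 10 pixels
# of disparity at 6.5 m from the rig.
depth = disparity_to_depth(focal_px=1000, baseline_m=0.065, disparity_px=10)
```

Larger disparities resolve as nearer objects, and as disparity tends to zero the point recedes to infinity – one reason the geometric alignment of the two cameras matters so much.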
No special cameras are needed, though great care is needed to colour match and geometrically align the cameras.
Today, given the right tools, post-production of stereo 3D content is also becoming simpler, quicker and cheaper than ever before. Any last-minute quality ‘fixes’ can be applied and, assuming that the camera rushes are well shot, with a little specialist knowledge content can be produced that is comfortable to watch.
More and more cameras produce files rather than videotapes, and more and more broadcasting today is file-based.
Transmitting two uncompressed baseband 1920x1080 4:2:2 Stereo3D HD signals through every stage of broadcast production, post-production and delivery looks commercially impractical for some broadcasters: it would use significant bandwidth, burn up disc space and risk the two signals picking up unwanted image artefacts along the way.
Losing sync is another issue; however, there are several image format schemes that aim to reduce the bandwidth required and the risk of loss of sync. These include ‘Side By Side’, ‘Side by Side with Enhancement’, ‘Checker Board’ and various proposed schemes for Stereo3D-specific compression.
For example, in side-by-side, over/under or checkerboard packing, a single signal is created that combines the left and right eye views into one image frame, which is then expanded back out by the viewing device. Because the brain fuses the two views into a single stereo percept, the pictures still look good despite the reduction in resolution.
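A minimal sketch of side-by-side packing, assuming naive pixel decimation for brevity (real broadcast encoders low-pass filter before subsampling, and displays interpolate rather than repeat pixels on expansion):

```python
import numpy as np

def pack_side_by_side(left, right):
    """Halve the horizontal resolution of each eye view and place the two
    halves in a single frame with the dimensions of one original frame."""
    half_left = left[:, ::2]    # naive decimation; real encoders filter first
    half_right = right[:, ::2]
    return np.concatenate([half_left, half_right], axis=1)

def unpack_side_by_side(packed):
    """Split the packed frame and expand each half back to full width.
    Pixel repetition stands in for the display's interpolation."""
    half = packed.shape[1] // 2
    left = np.repeat(packed[:, :half], 2, axis=1)
    right = np.repeat(packed[:, half:], 2, axis=1)
    return left, right

# A packed 1920x1080 frame fits both eye views into the bandwidth of a
# single conventional HD signal.
left = np.zeros((1080, 1920), dtype=np.uint8)
right = np.full((1080, 1920), 255, dtype=np.uint8)
packed = pack_side_by_side(left, right)   # same shape as one HD frame
```

The same idea underlies over/under (halving vertical rather than horizontal resolution) and checkerboard (quincunx subsampling) packing.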
Stereo3D-specific compression covers a variety of possible schemes that exploit the similarity between the two eye views to send only the data that is actually required.
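The principle behind such inter-view schemes can be illustrated with a toy residual-coding sketch; this shows the idea only, and is not any of the codecs under discussion:

```python
import numpy as np

def interview_residual(left, right):
    """Store the right eye view as a residual against the left. Because the
    two views differ mostly by small horizontal shifts, the residual
    typically has far less energy than the full right image and so
    compresses much better."""
    return right.astype(np.int16) - left.astype(np.int16)

def reconstruct_right(left, residual):
    """Losslessly rebuild the right eye view from left view plus residual."""
    return (left.astype(np.int16) + residual).astype(np.uint8)
```

Multiview extensions to standard codecs take this further by adding disparity-compensated prediction between the views, much as conventional codecs use motion compensation between successive frames.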
All these different schemes that support 3D transport over the existing HD distribution infrastructure are currently being debated.
The Society of Motion Picture and Television Engineers (SMPTE) recently set up a task force to examine 3D to the home; its very useful initial report is available from SMPTE, but it’s likely that the race to bring s3D to the public will be a marathon, not a sprint.
In the future, the public may be viewing s3D content in widely different contexts, from a single individual watching content via a games console to large audiences watching in a cinema or a stadium.
In the end, long-term commercial pressures will dictate which image formatting schemes triumph, in which applications.
Anyone thinking about making an s3D investment needs to make sure they are future-proofed.
There’s already a running debate around topics ranging from s3D acquisition, editing and compositing to delivery and display, and this potential lack of image file format standards is leading to some confusion in the industry. The good news is that agreement may prove easier to reach in some areas.
For any vendor offering s3D in file-based workflows, especially if there is transcoding or rewrapping involved, it’s extremely important to work at the correct level of quality, at an acceptable data payload and to make sure that the images are correct. s3D in a file-based workflow needs definition, industry agreement and guidance and the most likely container technology that will be used is MXF.
MXF is both a toolkit and a range of existing container formats, for doing the ‘heavy lifting’ in file-based workflows. MXF is increasingly the wrapper format of choice for broadcasters moving away from baseband video and towards file-based workflows. AmberFin, led by CTO Bruce Devlin (one of the original team behind MXF) is currently working with the SMPTE to incorporate s3D good practice in MXF.
Using MXF as the container format for s3D has many potential advantages. First, MXF is image agnostic – it does not specify which image format should be used. For example, it could just as easily carry a side-by-side image as an over/under one. Secondly, it is flexible in which compression scheme it carries; it could carry s3D images compressed as JPEG 2000, H.264 or other high quality codecs. Finally, it is an open, non-proprietary standard made by the industry, for the industry, and overseen by AMWA – a non-profit body that includes end-users and vendors. This means it can adapt to the future s3D needs of the industry.
The MXF work AmberFin is doing as part of the broader SMPTE s3D TV initiative, will help knit together file-based workflow and s3D in a way that can benefit everyone.
Forecasting the future is a contact sport; if you’re lucky you only come away with minor damage. We do know one big investment driver in the industry right now is the move from tape-based to file-based working. We don’t know what the s3D broadcast adoption curve is going to look like but the lessons learnt from history indicate customers and vendors will need to collaborate in order to work out ways to move forward that offer the right levels of image quality. We also know a robust file-based s3D workflow is going to be needed, to help make s3D broadcasting practical and profitable.
3D channels worldwide by end of 2010
Al Jazeera Sport 3D, Qatar
BS 11, Japan
MSG 3D*, US
Canal+ 3D*, France
Fox Sports, Australia
Sky 3D, UK
SKY 3D, South Korea
*Final station name TBA