All About Color Decoding
Color decoding is a technical term for the process whereby uncompressed RGB color information is encoded into compressed YPbPr (analog) or YCbCr (digital) format and then decoded back into RGB for display. The compression of color information involved is called chroma subsampling, and it is done to reduce data bandwidth in the chroma channels. There are various subsampling rates, but the most common is 4:2:0, which is used on Blu-ray and in the MPEG-2 codec for DVD.
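Conceptually, 4:2:0 keeps one chroma sample per 2x2 block of pixels while luma stays at full resolution. As a rough illustration (a minimal sketch using plain nested lists, not any real codec):

```python
# A minimal sketch of 4:2:0 chroma subsampling on a plain nested-list
# "image plane" (illustrative only; no real codec library is used).

def subsample_420(chroma):
    """Keep one chroma sample per 2x2 block (the top-left sample)."""
    return [row[::2] for row in chroma[::2]]

def upsample_420(chroma, height, width):
    """Nearest-neighbor reconstruction back to full resolution."""
    return [[chroma[y // 2][x // 2] for x in range(width)]
            for y in range(height)]

# A 4x4 chroma plane: 16 samples before subsampling, 4 after.
cb = [[10, 10, 20, 20],
      [10, 10, 20, 20],
      [30, 30, 40, 40],
      [30, 30, 40, 40]]

small = subsample_420(cb)            # [[10, 20], [30, 40]]
restored = upsample_420(small, 4, 4)
```

Real encoders filter rather than simply drop samples, but the bandwidth saving is the same: the chroma planes carry a quarter of the samples.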
There are two common color decoding specifications, Rec. 601 and Rec. 709. Let's spend a little time discussing each, because the differences between them are interesting. Rec. 601 is used for standard-definition programming, including DVD. Its formula relies on luma coefficients derived from a now long-obsolete gamut specification adopted in 1953 by the NTSC for the then-budding color television industry. That gamut was:

          x       y
Red     0.67    0.33
Green   0.21    0.71
Blue    0.14    0.08
White   0.310   0.316  (Illuminant C)
From this gamut you can mathematically derive the RGB luminance values of R 0.2990, G 0.5864, B 0.1146; rounded to 0.299, 0.587, and 0.114, these are the luma coefficients used for the RGB-YPbPr-RGB encoding/decoding.
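As a sketch of that round trip, here is an RGB-YPbPr-RGB encode/decode in Python using the rounded Rec. 601 coefficients and the standard YPbPr scaling (the function names and the sample color are mine):

```python
# Sketch of the RGB -> YPbPr -> RGB round trip with Rec. 601 luma
# coefficients, on normalized (0..1) full-range values.

KR, KB = 0.299, 0.114        # Rec. 601 (rounded from the 1953 derivation)
KG = 1.0 - KR - KB           # 0.587

def encode(r, g, b):
    y = KR * r + KG * g + KB * b
    pb = 0.5 * (b - y) / (1.0 - KB)   # scaled so Pb, Pr span [-0.5, 0.5]
    pr = 0.5 * (r - y) / (1.0 - KR)
    return y, pb, pr

def decode(y, pb, pr):
    r = y + 2.0 * (1.0 - KR) * pr
    b = y + 2.0 * (1.0 - KB) * pb
    g = (y - KR * r - KB * b) / KG    # luma minus the R and B shares
    return r, g, b

# Round trip on an arbitrary normalized RGB triple:
y, pb, pr = encode(0.8, 0.4, 0.1)
r, g, b = decode(y, pb, pr)  # recovers (0.8, 0.4, 0.1) up to rounding
```

When encoder and decoder agree on the coefficients, the trip is lossless (apart from subsampling and quantization); the trouble described below starts when they disagree.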
However, as time progressed and the desire for brighter displays emerged (early TVs were quite dim), the gamut was gradually restricted to accommodate this need. Through the 1960s this led to a lot of confusion and conflicting standards. Finally, in 1979, SMPTE (the Society of Motion Picture and Television Engineers) settled on the SMPTE 170M standard. This offered a much more restricted gamut, which allowed for brighter images, and a new white point known as D65. The gamut is called SMPTE-C:

          x       y
Red     0.630   0.340
Green   0.310   0.595
Blue    0.155   0.070
White   0.3127  0.3290  (D65)
From this gamut you can mathematically derive the RGB luminance values of R 0.2124, G 0.7011, B 0.0866. However, and this is the odd part of this story, SMPTE decided to stick with the 1953 luma coefficients for use in color decoding for displays with SMPTE-C primaries. This was probably done to ensure backwards compatibility, and in practice it had no substantial effect on the image. Thus, modern SD programming (broadcast and DVD) uses the SMPTE-C gamut but the Rec. 601 color decoding specification, which is not based on a modern gamut, but on the 1953 NTSC standard.
In the late 1990s, when we began moving toward high-definition displays, SMPTE recommended another standard, SMPTE 240M, which again used the SMPTE-C gamut. However, it also used the theoretically correct luma coefficients for color decoding of R 0.2124, G 0.7011, B 0.0866. SMPTE 240M was an interim specification that was quickly replaced by the Rec. 709 high-definition standard we use today. Rec. 709 is defined by the following gamut:

          x       y
Red     0.640   0.330
Green   0.300   0.600
Blue    0.150   0.060
White   0.3127  0.3290  (D65)
From this gamut you can mathematically derive the RGB luminance values of R 0.2126, G 0.7152, B 0.0722. Interestingly, this time it was agreed (as with SMPTE 240M) that Rec. 709 should use the theoretically correct luma coefficients for color decoding.
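To see what the choice of coefficients means in practice, here is the luma of one illustrative color computed both ways (the sample color is my example, not from the text):

```python
# Luma of the same normalized RGB color under the two coefficient sets.

def luma(r, g, b, kr, kg, kb):
    return kr * r + kg * g + kb * b

rgb = (0.2, 0.9, 0.3)                       # a saturated green
y601 = luma(*rgb, 0.299, 0.587, 0.114)      # Rec. 601 coefficients
y709 = luma(*rgb, 0.2126, 0.7152, 0.0722)   # Rec. 709 coefficients
# Green carries more weight under Rec. 709, so y709 > y601 here.
```

The gap between the two results is exactly the kind of discrepancy that surfaces as a visible error when material is decoded with the wrong standard.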
To complicate matters even more, the EBU (European Broadcasting Union) settled on a gamut for its PAL broadcast system that is only slightly different from Rec. 709. However, the EBU continued with the 1953 luma coefficients.
Of course, the problem with all of this is that, just considering North America, there are three sets of luma coefficients (Rec. 601, SMPTE 240M, and Rec. 709) and two gamuts currently in use (SMPTE-C and Rec. 709). It is vitally important that a display device apply the same color decoding standard that was used to encode the source material. In general, this means displays should use Rec. 601 for signals from SD material and Rec. 709 for signals from an HD source. Unfortunately, manufacturers sometimes get this wrong and color decoding errors occur. For example, if HD source material was encoded with Rec. 709 but the display incorrectly uses Rec. 601 for decoding, severe color errors arise, especially in green and cyan.
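That mismatch can be sketched numerically: encode a pure green with the Rec. 709 coefficients, then decode it with Rec. 601, as a misconfigured display would. This is a simplified full-range model of my own that ignores gamut differences and quantization:

```python
# Sketch of a color decoding mismatch: Rec. 709 encode, Rec. 601 decode.

def encode(r, g, b, kr, kb):
    kg = 1.0 - kr - kb
    y = kr * r + kg * g + kb * b
    return y, 0.5 * (b - y) / (1.0 - kb), 0.5 * (r - y) / (1.0 - kr)

def decode(y, pb, pr, kr, kb):
    kg = 1.0 - kr - kb
    r = y + 2.0 * (1.0 - kr) * pr
    b = y + 2.0 * (1.0 - kb) * pb
    return r, (y - kr * r - kb * b) / kg, b

y, pb, pr = encode(0.0, 1.0, 0.0, 0.2126, 0.0722)  # pure green, Rec. 709
r, g, b = decode(y, pb, pr, 0.299, 0.114)          # wrongly decoded as 601
# Green comes back above 1.0 (it will clip), and red and blue are
# pushed off zero, so the "pure" green is no longer pure.
```

Decoding with the matching coefficients instead returns (0.0, 1.0, 0.0) exactly, which is why matching the standard matters more than which standard is used.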
That's the technical background for color decoding. Confusion arises over terminology, however, because "color decoding" is sometimes used more informally to describe the visible errors in color reproduction that result from mistakes in the encode/decode process described above, and the controls displays provide to correct them.
Because the bulk of color decoding errors appear as errors in either the hues of the secondary colors or the brightness of the primary colors, the phrase "color decoding error" is sometimes used loosely to describe these types of inaccuracies. Saturation for all colors, and the hues of the primary colors, are only slightly affected by color decoding errors. In fact, the Tint and Color controls found on all NTSC displays are most effective at adjusting precisely the types of errors poor color decoding creates. Thus, Color and Tint can be thought of as color decoding controls, despite the fact that they play no role in the encode/decode process described above.
Unfortunately, the Color and Tint controls found on commercial displays are blunt instruments that do not really address the problems created by color decoding errors. The biggest problem is that these controls affect all of the colors more or less equally, whereas color decoding errors affect the colors in a variety of ways and thus require individual attention.
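For illustration, a common simplified model (my assumption, not something specified in any standard) treats Color as a uniform gain on the chroma vector and Tint as a uniform rotation of it:

```python
import math

# Simplified model of the Color and Tint controls: Color scales the
# (Pb, Pr) chroma vector, Tint rotates it, identically for every pixel.

def apply_controls(pb, pr, color_gain, tint_degrees):
    t = math.radians(tint_degrees)
    pb2 = color_gain * (pb * math.cos(t) - pr * math.sin(t))
    pr2 = color_gain * (pb * math.sin(t) + pr * math.cos(t))
    return pb2, pr2

# A +10 degree tint shifts every color's hue by the same amount, which
# cannot fix an error that affects, say, only cyan.
pb, pr = apply_controls(0.3, -0.2, 1.0, 10.0)
```

Because the same gain and rotation hit every color, these controls can only trade one set of errors for another when the underlying decoding matrix is wrong.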
If you are interested in exploring this further, Wikipedia has good pages on chroma subsampling and YCbCr. Also, I have created a spreadsheet that shows exactly the type of errors to expect from improper color decoding.