Color Space 101

From Wikinico


The aim of this page is to give artists the minimum understanding required for color-space manipulation.

It aims to stay simple, not to be fully accurate.


Without color management, depending on the information stored in the image, you would see:

Image:NG Kodak lin 2.jpgImage:NG Kodak srgb 2.jpgImage:NG Kodak log 2.jpg


Linear color space = physical color space

Physics provides headaches but it also provides laws to understand the behavior of light.

The formulas engineers use to manipulate images come from physics, so they are designed to work well in the physical color space (where light is measured). In the physical color space, light behaves according to linear laws ("linear" is part of math vocabulary; it roughly means "proportional"), so the physical color space is most often called linear color space.

The formulas don't work too badly in other color spaces, so many people don't see the point of investing effort in mastering the linear color space.

The point is that you get the best results.

How to recognize if an image is stored in linear?

An image stored in linear will appear TOO DARK if you display it in software without color management.

So 95% of the time, when you see an image that is obviously too dark (without color management), you can assume it is stored in linear.

Image:NG Kodak lin 2.jpg

Format designed to use linear color space by default

OpenEXR (.exr)

But that does not mean every OpenEXR file stores linear data.

If you work in linear

Don't work at 8 bits per channel. It would give really poor results (I don't want to explain why here because I want this page to stay simple).


Once understood, you can get better results in lighting and compositing.


You must use software with color management to see your images properly.

You must unlearn the parameters of your "classic lights" when using "physical lights".

Physical lights / physical shaders

  • If formulas are designed to work in linear space, why don't the "classic parameters" of lights work?

Because of the speed of computers and the price of storage at the beginning of CG.

Computers were slow and storage was expensive, so it was not possible to use the equations without big approximations and big sacrifices, "hacking" these equations (ex: specular and reflections). It was also not possible to use linear values, because they require more than 8 bits per channel to store values of interest relative to human vision. The habit of using hacks led to disconnections between some techniques and the physical world (by the way, most of these hacks were needed; better to have something imperfect than nothing). But the main idea behind most of these formulas comes from physics. Now faster computers and cheaper storage allow techniques to move back closer to reality (ex: global illumination).


Log color space = film scan color space

How to recognize if an image is stored in log?

An image stored in log will appear TOO WASHED OUT if you display it in software without color management.

So 95% of the time, when you see an image that is obviously too washed out (without color management), you can assume it is stored in log.

Image:NG Kodak log 2.jpg

Formats designed to use log color space by default

Cineon (.cin, or a number padded to 4 digits with no extension)
Dpx (.dpx)

If you work in log

If you're manipulating images in log without first converting them to linear (or sometimes to sRGB), you're probably making a mistake, or you're working on dedicated software/hardware able to handle log plates natively.

If it's the second option, you probably don't need to read this page, so I'll go with the first option: mistake.

Proper use of log images

Log images are images coming from film scanners. They are designed to carry as much information as possible about color. Don't see them as directly usable images, but rather as "archives" (like zip or tar.gz files). Once converted to linear (the equivalent of unzipping), you can work with the images. This conversion is often done on the fly in compositing software.

Once the work is finished, you need to convert it back to log to be able to shoot it out to film.


sRGB color space = regular color space before I knew about color spaces

How to recognize if an image is stored in sRGB?

An image stored in sRGB will appear CORRECT if you display it in software without color management.

So 95% of the time, when you see an image that looks correct (without color management), you can assume it is stored in sRGB.

Image:NG Kodak srgb 2.jpg

Format designed to use sRGB color space by default

JPEG (.jpg or .jpeg)

What is inside sRGB?

sRGB is designed to enable straight display from the values in your video card's image buffer to an average monitor, without color management. This avoids settings for the user and avoids on-the-fly computation for the machine, so it's faster on slow machines (not really important now, but it was when sRGB was created). The other important reason is that it's an efficient way to get many visually different colors in the same storage size (= when you only have 8 bits per channel, displaying an sRGB image looks better than displaying a linear image with color management).
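To make the storage-efficiency point concrete, here is a small sketch (my own illustration, not from the original page) counting how many of the 256 8-bit codes land in the dark range the eye is sensitive to, first when codes are stored linearly, then when they are stored with the page's gamma-2.2 approximation of sRGB:

```python
def codes_below(threshold, decode):
    """Number of 8-bit codes whose decoded linear value is below `threshold`."""
    return sum(1 for c in range(256) if decode(c / 255.0) < threshold)

linear_dark = codes_below(0.05, lambda v: v)       # straight linear storage
srgb_dark = codes_below(0.05, lambda v: v ** 2.2)  # gamma-2.2 decoded to linear

# Linear storage spends only a handful of codes on the shadows,
# while gamma-encoded storage spends several times more of them there.
print(linear_dark, srgb_dark)
```

With an 0.05 threshold, linear storage uses 13 codes for the shadows while the gamma-encoded storage uses 66: the same 8 bits describe dark tones much more finely.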

sRGB is the regular-joe color space of the internet world. For your information, if you have heard about Rec. 709, that's the color space used for HDTV. It is roughly similar to sRGB in principle, with a different gamma.


Nothing to understand. Someone has already applied a formula to the pixels so they look good directly in a program without color management (= pretty much all mass-market programs, and until a few years ago many professional programs as well). For instance, your camera processes the captured data to create a JPEG. Easy for the brain.


  • Why do I often have to add color grading to compensate for crappy blending? Why are my lights hard to adjust, flipping quickly from too dark to too bright?

Because the equations to do these computations are NOT designed for sRGB color space.

Why can we work in sRGB if the equations are designed for the linear color space?

Because even though they are different, they have some relationship. So these equations behave less nicely, but similarly enough to meet most of your expectations.

But the main reason is: we judge the quality of the work by eye (a subjective criterion).

So we compensate parameter values by eye, and we don't care about the parameter values; we care about the final image.

If we used the same tricks to compute the amount of energy received by a patient from a laser during an operation, we would kill people!


linear to sRGB

A gamma of 2.2 is a not so bad approximation to convert a linear image to a sRGB image:

Image:NG Kodak lin 2.jpg --> GAMMA( 2.2 ) --> Image:NG Kodak srgb 2.jpg

sRGB to linear

A gamma of 1/2.2:

Image:NG Kodak srgb 2.jpg --> GAMMA( 1 / 2.2 ) --> Image:NG Kodak lin 2.jpg

log to linear

In shake, use a LogLin node with parameter conversion set to "log to lin".

Image:NG Kodak log 2.jpg --> LOGLIN( conversion = "log to lin" ) --> Image:NG Kodak lin 2.jpg

linear to log

In shake, use a LogLin node with parameter conversion set to "lin to log".

Image:NG Kodak lin 2.jpg --> LOGLIN( conversion = "lin to log" ) --> Image:NG Kodak log 2.jpg


Linear workflow, need of color management

If you display straight forward an image buffer stored in linear color space, you will seea an image too dark. So if you work with linear images, you need a system in the software to convert on the fly linear images to sRGB images for display only.

If your in shake you could have an extra gamma node set to (1/2.2) to do that. You could attach it each time after the node you want to view an see the gamma node in the viewer. Of course it would be time consuming and probably error prone and very annoying!!!

To avoid that, imagine this gamma node is hidden by the interface, and automatically linked beetween you current displayed node and the viewer.

There is such a mechanism in several software, including Shake and now XSI (not in Maya at the time I'm writing).

It's often called LUT or viewer LUT.

In shake, it is the VLUT menu at the bottom left of the viewer window.

What is LUT ?

LUT means : Look Up Table.

It is an array of precomputed values to mimic a mathematical function.

In 8 bits for instance, you get 256 values to compute to define a function taking a 8 bits argument.

Once computed you can look inside this array using your 8 bit value as an index to get the transformed result.

What the link beetween my gamma and my LUT ?

As a regular function, you can precompute you gamma values in a LUT to mimic a gamma function.

The mechanism of LUT enable the use of several conversion functions. Some people may decide to do conversion with gamma, others will use more sophisticated transfer functions (ex: Truelight conversions).

There is two kinds of LUT widely used in the industry LUT 1D and LUT 3D.


A LUT 1D means : one dimensionnal array

It is used to precompute a function with ONE INPUT. So by using three LUT 1D, you can precompute simple grading were each ouput channel is depending only on ONE input channel.

R input -> R output
G input -> G output
B input -> B output
  • 8 bits to 8 bits LUT 1D

One LUT 1D takes 256 bytes of memory.

One operation using three LUT 1D needs precomputed arrays of total size: 3 * 256 = 768 bytes = 0.75 K

  • 16 bits to 16 bits LUT 1D

One LUT 1D takes 65536 * 2 bytes = 131072 bytes = 128 K

(There is a factor of 2 because 16 bits is 2 bytes).

One operation using three LUT 1D tneeds precomputed arrays of total size: 3 * 128 K = 384 K = 0.375 M

Some examples from shake nodes possibly using LUT 1D:


LUT 3D (=cube)

Sometimes grading or conversion are more complicated and each output channels can't be defined by knowing only one input channel.

In this case you can still precompute your grading or conversion but it is more costly.

You need to sample your color cube as if it was a volume (think about sampling densities in a fluid cube).

At each position in your sampled cube you associate a color. The position is the input color, the color associated to the position is the output color.

Finaly, when you process your image, you find which color is associated in to your input color in your sampled volume (which has the form of a cube).

Because you compute the whole color, you need one LUT 3D for the whole image operation you're doing.

(R,G,B) input -> (R, G, B) output
  • 8 bits to 8 bits LUT 3D

One LUT 3D takes 256 * 256 * 256 * 3 Bytes = 50331648 Bytes = 49152 KB = 48 MB (it's multiplied by 3 because in a LUT 3D you compute the resulting RGB channels at once)

One operation in 8 bits using one LUT 3D needs a precomputed array of total size: 48 MB

  • 16 bits to 16 bits LUT 1D

One LUT 3D takes (65536 * 2) * (65536 * 2) * (65536 * 2) * 3 Bytes = 6291456 GB = 6144 Tera Bytes !!!!!!

As you can guess it is really too huge. So if you're doing a LUT 3D for 16 bits images you need to down sample your 16 bits color cube and use interpolation on the fly to get the missing samples.

As you can see, one LUT 3D is more powerfull but more expensive than three LUT 1D.

Some examples from shake nodes possibly using LUT 3D:

Keyers (think as your resulting alpha as a RGB gray image, same maths behind)

Questions? / Answers?

Feel free to send me a mail at

Personal tools