The Developer’s Cry

Yet another blog by a hobbyist programmer

OpenGL sRGB color space

Way back in the 1990s, computer displays were CRTs (cathode ray tubes). Not only was the screen resolution low (games typically ran at 320x200, sometimes in the so-called “mode X” at 320x240), the color reproduction was also notoriously bad. Green on one display would not be quite the same green on another. And then there was gamma: on some monitors the game Doom (1993) was so dark that people just couldn’t play it, so the game included a gamma correction setting that let you boost the lightness.

When the World Wide Web happened, this color reproduction problem became more apparent due to the use of photo images in web pages. Faithful color reproduction may not seem very important to the average business PC user, but for professionals and photo enthusiasts it is a huge problem if you scan or print and the colors don’t come out right. In 1996 HP and Microsoft proposed a new standard color space, to be used in all hardware and software: sRGB. This standard color profile was to be used in operating systems, images, displays, cameras, scanners, printers and, explicitly mentioned, across the internet (meaning: in web browsers). Anything dealing with color has to use the same color profile in order to get faithful color reproduction. Although today we have moved on to much better displays, we still reap the benefits of this proposal. It’s a nice historical fact just how much of this was driven by the internet.

A color space’s gamut (the range of colors it can represent) is typically depicted as a triangle in a chromaticity diagram. These diagrams are a little tricky to interpret because you typically view them on an sRGB display, while they represent a much larger color space. Still, they give a good indication of which colors are in a particular space, and which colors a particular device might be able to display.

If you save a photo in Adobe RGB and then display it as if it were sRGB, the colors will look bleak and washed out, because sRGB simply cannot represent the most saturated of those colors. If you save the original in sRGB to begin with, it will generally look perfectly fine, with strong colors. sRGB does have a limited color space though, and at 8 bits per channel the human eye easily discerns bands in smooth gradients.

In OpenGL we render images to the screen. We can make color reproduction faithful by simply forcing everything to sRGB. When loading a texture you typically have something like this:

glGenTextures(1, &tex_id);
glBindTexture(GL_TEXTURE_2D, tex_id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img.w, img.h, 0, img.format, GL_UNSIGNED_BYTE, img.pixels);

Note the GL_RGBA value in there: that is the internalformat parameter, which specifies the format OpenGL will store the texture in. For sRGB we change the internalformat to GL_SRGB8_ALPHA8:

glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, img.w, img.h, 0, img.format, GL_UNSIGNED_BYTE, img.pixels);

There is a second format parameter here (img.format) which describes the layout of the pixel data we pass in; that one stays a regular GL_RGB or GL_RGBA, the sRGB choice is made with the internalformat only. To get good results (no internal conversions) we should store the image in sRGB format beforehand in our drawing program (the option is often named “Save for Web” if it isn’t already the default). The image loading code must pick the right internal format for the image’s color type. In my PNG loader I explicitly put:

switch(png_get_color_type(png, info)) {
    case PNG_COLOR_TYPE_RGB:
        internal_format = GL_SRGB8;
        format = GL_RGB;
        break;

    case PNG_COLOR_TYPE_RGBA:
        internal_format = GL_SRGB8_ALPHA8;
        format = GL_RGBA;
        break;

    ...
}
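The mapping from PNG color type to the pair of GL formats can also be isolated in a small helper, which makes it easy to unit-test. This is a sketch with a hypothetical name (pick_formats); the numeric constants are repeated from the GL and libpng headers so the snippet stands alone:

```c
/* Constants as defined in the GL and libpng headers,
 * repeated here so the sketch compiles stand-alone. */
#define GL_RGB              0x1907
#define GL_RGBA             0x1908
#define GL_SRGB8            0x8C41
#define GL_SRGB8_ALPHA8     0x8C43
#define PNG_COLOR_TYPE_RGB  2
#define PNG_COLOR_TYPE_RGBA 6

/* Pick the sRGB internalformat and the matching pixel-transfer
 * format for a PNG color type. Returns 0 on unsupported types. */
static int pick_formats(int color_type, unsigned *internalformat, unsigned *format)
{
    switch (color_type) {
    case PNG_COLOR_TYPE_RGB:
        *internalformat = GL_SRGB8;
        *format = GL_RGB;
        return 1;
    case PNG_COLOR_TYPE_RGBA:
        *internalformat = GL_SRGB8_ALPHA8;
        *format = GL_RGBA;
        return 1;
    default:
        return 0;
    }
}
```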

Again, this code does not magically convert to sRGB; the image must already have been saved in sRGB format. Also note that this code simply ignores any special color profile that might be embedded in the PNG. (Which is a little odd in itself, because PNG is a format for the web, which basically means it should use sRGB, but I digress.)

Rendering goes the same way as always; you specify vertices and let OpenGL do the texturing. Finally, we tell OpenGL to convert linear color values to sRGB when writing to the framebuffer:

glEnable(GL_FRAMEBUFFER_SRGB);
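One caveat: GL_FRAMEBUFFER_SRGB only has an effect if the framebuffer you render to is actually sRGB-capable. For the default framebuffer this is decided at window creation time; with GLFW, for example, you would request it with a window hint (a fragment, your windowing library may differ):

```c
/* Ask for an sRGB-capable default framebuffer before creating the window. */
glfwWindowHint(GLFW_SRGB_CAPABLE, GLFW_TRUE);
GLFWwindow *win = glfwCreateWindow(1280, 720, "game", NULL, NULL);
```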

And that is all there is to it. I can’t say that I really noticed any difference, which actually makes sense: sampling an sRGB texture converts the texels to linear, and the sRGB framebuffer converts them back on write, so as long as you don’t do any lighting or blending in between, the round trip is (nearly) the identity. It may well be that sRGB effectively was the default already; that’s the whole point of having a standard. Besides, I have a retro-style game where there aren’t that many colors to begin with, so it’s hard to tell.

Modern displays support wider gamuts, for example the DCI-P3 standard, and an HDR photo of the sun will nearly burn your eyes out. The sRGB standard is old. But since sRGB is the standard for the web, we are kind of stuck with it, unless each photo carries its own color profile (and they often do). I suppose it’s not that important for indie gamedev, but any developer working with artists or photo professionals should know a bit about color management.

Interesting reads: