Fast Integer Lightness: brintness

BRINTNESS

Fast integer lightness calculations from RGB — by Andrew Somers

brintness is an integer brightness/lightness/darkness calculation

This is part of an experiment in estimating a perceived brightness while remaining in integer math and using bitshifts to maximize performance (avoiding all subtraction, division, and square-root calculations).

The Issue

The traditional means to determine the perceived lightness or brightness for a given color value is to first normalize R, G, and B from 0-255 to 0.0-1.0, linearize the values via exponent or more exotic methods (we assume colors are in a gamma-encoded color space, such as sRGB), then create a linear luminance value by applying coefficients to each of the R, G, B values and summing them, and finally apply an exponent or more exotic math to find a predicted lightness value.
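
For reference, here is a sketch of that conventional pipeline, using the IEC sRGB piecewise transform and the CIE L* lightness conversion (the function name is illustrative):

// Reference sketch: sRGB ints (0-255) to CIE L* (0-100), the "accurate" route
function srgbToLstar (sR, sG, sB) {
  // normalize 0-255 to 0.0-1.0, then linearize per the IEC piecewise
  const lin = (c) => {
    c /= 255.0;
    return c <= 0.04045 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  };
  // apply the sRGB/Rec709 coefficients to the linearized values
  const Y = lin(sR) * 0.2126 + lin(sG) * 0.7152 + lin(sB) * 0.0722;
  // CIE lightness L*: a cube root above a small linear toe
  return Y > 216 / 24389 ? 116 * Math.cbrt(Y) - 16 : Y * 24389 / 27;
}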

This is computationally expensive. And even then, we generally miss factors such as the HK effect, and the above as described does not consider the importance of context. In other words, we may say "this is the accurate way" and yet it still lacks in accuracy.

The Unbearable Lightness of Perception

So: if the commonly accepted methods lack inherent accuracy due to disregarding certain factors, and given that RGB color spaces are often encoded with a gamma or transfer curve of some type, which, while different from most lightness curves, is still "in the ballpark" in terms of perception, then perhaps we can lean on that encoding directly instead of doing the full conversion.

And let's not forget that the human vision system has its own built-in gain control that makes measuring lightness perception a frustrating task that is still a matter of emerging science.

How Fast Does Red Weigh?

Light in the world follows simple linear math. That is, if you have 100 photons of light and triple it, you then have 300 photons of light. Human vision, however, does not perceive light linearly: a given change in light value will result in a larger or smaller change in perceived lightness, depending on a number of contextual factors.

And light does not have a "color": color is only a perception of our vision system. But light does have different wavelengths or frequencies, like musical notes on a piano, for want of an analogy. Human vision is most sensitive to a very narrow range of "middle notes", the middle wavelengths we identify as green, with sensitivity rapidly dropping off for shorter (blue) or longer (red) wavelengths.

Note

So to model the mixing of light, we often want to be in a linear space, but when we want to predict how we see a color or lightness, we want to be in a space that is curved per our perception in the given context.

Among the implications is that each of the red, green, and blue primaries in our display is weighted differently, based on an averaged visual sensitivity to each, so that #ffffff, i.e. equal values of R, G, & B, is white or grey. Because these weights are being applied to light sources, they should ideally be applied in a linear space. If you apply spectral weighting to values that are gamma or TRC encoded, you'll get some errors, most noticeable in the middle ranges.

Never The Same Color: NTSC

With all of the above as some foundation, let's not forget that for decades, NTSC encoding for Luma ($Y^\prime$, i.e. Y prime) applied the weighting to gamma-encoded signals. Luma is a gamma-encoded achromatic signal, and it is what black and white televisions displayed.

Note

In the examples below, we assume $R, G, \& B$ are normalized to $0.0-1.0$

The common NTSC weights for Luma are $Y^\prime = R^\prime \times 0.299 + G^\prime \times 0.587 + B^\prime \times 0.114$. The fact that they are applied to gamma-encoded values is not that problematic, as long as the decoding at the set uses the inverse transform. The image seen on black and white televisions, however, while essentially compatible, does look a bit different when fed a Luma signal versus an actual black and white signal.

sRGB, which uses different primaries, applies its weights to linearized values: $Y = R \times 0.2126 + G \times 0.7152 + B \times 0.0722$. Here $Y$ is linear Luminance, not gamma-encoded Luma $Y^\prime$.

Note

And to be very clear, $Y^\prime \neq Y$.
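
To illustrate how unequal they are, consider middle grey #808080, i.e. 128/255 ≈ 0.502 (a sketch, using a simple 2.2 gamma rather than the IEC piecewise):

// sRGB weights applied directly to the encoded value (a Luma-style calculation)
let Yprime = 0.502 * 0.2126 + 0.502 * 0.7152 + 0.502 * 0.0722;  // ≈ 0.502

// the same weights applied after linearizing with a simple 2.2 gamma (Luminance)
let Y = 0.502 ** 2.2 * 0.2126 + 0.502 ** 2.2 * 0.7152 + 0.502 ** 2.2 * 0.0722;  // ≈ 0.22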

The First Rule of Bright Club is...

Important

The point of this Gist is a method for calculating an achromatic lightness that is accurate enough to be useful, but computationally fast, so that it is suitable for applications in realtime image analysis.

If the image being analyzed is in a gamma-encoded space, and the gamma value is "close enough" to that of human lightness perception for the given case, then we can probably apply coefficients and sum without linearizing. The middle range of lightness/darkness and saturation will be the least accurate, while the highest or lowest saturation or brightness will be the least affected by our "cheating" here, assuming we use the standard weightings for sRGB/Rec709.

$pseudoLightness = sR^\prime \times 0.2126 + sG^\prime \times 0.7152 + sB^\prime \times 0.0722 $

Though we might improve the middle range a bit at the expense of the high and low end, splitting the difference if you will, by adjusting the weightings to spread the errors more evenly across the range as a compromise.

$pseudoLightness = sR^\prime \times 0.25 + sG^\prime \times 0.66 + sB^\prime \times 0.09 $ (experimental weighting)
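
As a floating-point sketch of both (with sR, sG, sB normalized 0.0-1.0 per the earlier note; variable names are illustrative):

// weights applied directly to gamma-encoded values, no linearization
let pseudoLightness  = sR * 0.2126 + sG * 0.7152 + sB * 0.0722;  // standard sRGB weights
let pseudoLightnessX = sR * 0.25   + sG * 0.66   + sB * 0.09;    // experimental weights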

Warning

CAVEAT: The following is beta, not fully tested yet, implementation depends on language, so below is pseudocode. Also, and I'll mention this often, these are not intended to be "accurate" lightness calculations, they are just intended to be FAST yet still reasonable...

Now, if we are working with 8-bit int values for each primary, so each is 0-255, but we'd like a lightness value that is 0-100, and the language and/or hardware is fastest working with ints, then we might optimize for speed with:

    // r,g,b are 0-255, brintness is 0-100
int brintness = (r * 25 + g * 66 + b * 9 + 100) >> 8;

So here, the coefficients are ints being applied to the int color values, adding 100, and bit-shifting by 8 which is the same as dividing by 256. The result is an integer lightness value of 0-100.

Tip

The coefficients add up to 100, so the maximum value is 25500. Bit-shifting that >> 8 (the same as dividing by 256 and truncating) gives a 0-99 range. Adding 100 before the bit shift gives us the full 0-100 range.
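
Worked out at the extremes: $\lfloor 25500 / 256 \rfloor = 99$, but $\lfloor (25500 + 100) / 256 \rfloor = 100$, while black remains $\lfloor (0 + 100) / 256 \rfloor = 0$.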

Alternately, if we want to construct a B&W image, or otherwise want to output a 0-255 range for brintness, we can pre-multiply the coefficients relative to the size of the bit-shift. In this case we bit-shift >> 10, which is the same as dividing by 1024. Here, we took the weights from the previous example, multiplied them by 10.24, then rounded or truncated back to ints so that the total of the weights is exactly 1024:

    // r,g,b are 0-255, brintness is 0-255
int brintness = (r * 256 + g * 676 + b * 92) >> 10;

Tip

And again, to be abundantly clear: these coefficients may not be useful or accurate enough for any given application. We're cheating in the name of fewer cycles.

One more, and this one may be the fastest, depending on hardware/language factors: add two red, five green, and one blue. The bit-shift of >> 3 is the same as dividing by 8:

    // r,g,b are 0-255, brintness is 0-255
int brintness = (r+r+g+g+g+g+g+b) >> 3;

This last one is essentially equivalent to:

$floatingLightness = sR^\prime \times 0.25 + sG^\prime \times 0.625 + sB^\prime \times 0.125 $

Notice the closeness to the traditional NTSC values of $0.3,\ 0.59,\ 0.11$, kind of splitting the difference toward sRGB's $0.213,\ 0.715,\ 0.072$, at least for red and green.

Which is close enough for some applications. Or we can add 4 red, 11 green, 1 blue and shift by 4:

    // r,g,b are 0-255, brintness is 0-255
int brintness = (r+r+r+r+g+g+g+g+g+g+g+g+g+g+g+b) >> 4;

This shifts the blue lower, green higher, so it's equivalent to:

$floatingLightness = sR^\prime \times 0.25 + sG^\prime \times 0.6875 + sB^\prime \times 0.0625 $

Which gets it closer to sRGB: $\ 0.213 \ \ 0.715 \ \ 0.072 \ $

More Tricky Bit Fiddling

Thinking about some of the versions above, we can also use bit shifts in the addition portion, and reduce the cycle count further.

    // r,g,b are 0-255, brintness is 0-255
int brintness = ((r << 1) + (g << 2) + g + b) >> 3;

Is essentially equivalent to:

$brintness = (sR^\prime \times 2 + sG^\prime \times 4 + sG^\prime + sB^\prime) / 8 $

Or

$brintness = (sR^\prime \times 2 + sG^\prime \times 5 + sB^\prime) \times 0.125 $

Or

$floatingLightness = sR^\prime \times 0.25 + sG^\prime \times 0.625 + sB^\prime \times 0.125 $

And we can make some adjustments to the relative weights, as shown here:

    // r,g,b are 0-255, brintness is 0-255
int brintness = ((r << 1) + (g << 2) + g + (g >> 1) + (b >> 1)) >> 3;

Is essentially equivalent to:

$brintness = (sR^\prime \times 2 + sG^\prime \times 5.5 + sB^\prime \times 0.5) \times 0.125 $

Or

$floatingLightness = sR^\prime \times 0.25 + sG^\prime \times 0.6875 + sB^\prime \times 0.0625 $

Or we can weight the red greater, as in:

    // r,g,b are 0-255, brintness is 0-255
int brintness = ((r << 1) + (r >> 1) + (g << 2) + g + (b >> 1)) >> 3;

Which is essentially equivalent to:

$brintness = (sR^\prime \times 2.5 + sG^\prime \times 5 + sB^\prime \times 0.5) \times 0.125 $

Or

$floatingLightness = sR^\prime \times 0.3125 + sG^\prime \times 0.625 + sB^\prime \times 0.0625 $

LumINTance

Make a B-Line to Linear

The above assumed gamma-encoded color or image data, and created a pseudoLightness in minimal CPU cycles.

But what if we need to be in linear light, not perceptual lightness? Some time ago, I presented "Andy's Down and Dirty Grayscale", which output a gamma encoded sRGB compatible grayscale from an RGB value.

             // ANDY'S DOWN AND DIRTY GRAYSCALE™
            // sR sG sB are 0-255 sRGB values. The ** replaces Math.pow and works with recent browsers.
           // For purists: Yea this is NOT the IEC piecewise, but it's fast and simple, hence 'down and dirty'

  let gray = Math.min(255,((sR/255.0)**2.2*0.2126+(sG/255.0)**2.2*0.7152+(sB/255.0)**2.2*0.0722)**0.4545*255); 

But if we strip off the conversion back to sRGB 0-255, we are left with a linear luminance from the RGB value:

             // ANDY'S DOWN AND DIRTY LUMINANCE™ - Luminance in one line.
            // sR sG sB are 0-255 sRGB values. The ** replaces Math.pow and works with recent browsers.
           // For purists: Yea this is NOT the IEC piecewise, but it's fast and simple, hence 'down and dirty'

  let sY = (sR/255.0)**2.2*0.2126 + (sG/255.0)**2.2*0.7152 + (sB/255.0)**2.2*0.0722;

Lumintance

Can we do some of what we were doing earlier to simplify? One of the problems here is that to linearize the encoded values, they need to be normalized such that 0-255 is mapped to 0.0-1.0, and by definition that eliminates ints. But we can reduce some of the more expensive math: instead of dividing, we can multiply by a pre-calculated $1/255$; instead of raising to the power of 2.2, we can square by multiplying; and since all we are going to do is multiply and add, we can combine each coefficient with pre-calculation. Ultimately, we can avoid the normalize step entirely.

Step 0: $(sR/255.0)^{2.2} \times 0.2126$
Step 1: $(sR \times 0.003921568627451)^{2.2} \times 0.2126$
Step 2: $sR \times 0.003921568627451 \times sR \times 0.003921568627451 \times 0.2126$
Step 3: $sR \times sR \times 0.0000153787005 \times 0.2126$
Step 4: $sR \times sR \times 0.000003269511726$

Note

In step 2 we make the leap from an exponent of 2.2 to 2.0.

So then our multiply and add-only version is:

$sY = sR \times sR \times 0.000003269511726 + sG \times sG \times 0.000010998846597 + sB \times sB \times 0.000001110342176$
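
Those long constants need not be hand-typed; they fall out of the derivation above (a sketch; sRGB/Rec709 weights assumed):

// fold the squared normalization factor (1/255)² into each sRGB weight, once, up front
const K  = 1 / (255 * 255);  // ≈ 0.0000153787005
const cR = 0.2126 * K;       // ≈ 0.000003269511726
const cG = 0.7152 * K;       // ≈ 0.000010998846597
const cB = 0.0722 * K;       // ≈ 0.000001110342176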

Warning

This is not a "technically correct" linear luminance, and can be too light in the midrange.

It is simply intended as a minimal, pre-optimized way to "mostly" linearize sRGB values. The pre-multiplied coefficients shown are for sRGB or Rec709 only. As an example, using the correct way, rgb(128,128,128) returns 21.5, but with this cheat method rgb(128,128,128) returns 25.2, which is a 17% difference, with the error decreasing for higher or lower RGB values.

      // sY is 0.0-1.0
float sY = sR * sR * 0.000003269511726 + sG * sG * 0.000010998846597 + sB * sB * 0.000001110342176;

But the point of this Gist is to end up with an int between 0-255. Obviously we could multiply sY by 255, but the more elegant solution is to distribute and pre-multiply the coefficients. The following creates a semi-linearized luminance that can be truncated into an int 0-255 (lumintance):

      // sY is 0-255
let sY = sR * sR * 0.000833725490196 + sG * sG * 0.002804705882353 + sB * sB * 0.000283137254902;

The only difference between the 0.0-1.0 and the 0-255 version is:

  • 0.0-1.0: premultiply each sRGB coefficient by $\ 0.0000153787005 $
    • (Which is just $\ 0.003921568627451^2 \ $)
  • 0-255: premultiply each sRGB coefficient by $\ 0.003921568627451 $
    • Note however that this means that RGB values less than 13,13,13 will equal zero; 0-255 is not good precision for a linear color space!

The reality is that 0-255 should not be used for linear values. If staying in integer math, the minimum needed for linear is 12-bit (0-4095), with 16-bit or more preferred. Assuming we have 32-bit registers at our disposal, an efficient 12-bit linear lumintance from sRGB is:

//// CONVERT 8bit sRGB to 12bit lumintance ////

       // sRGB values are 0-255
      // Coefficient sum is 258
     // sYint is 0-4095
let sYint = (sR * sR * 56 + sG * sG * 183 + sB * sB * 19) >> 12;

In this case we are using coefficients that sum to 258, so the result for full value is $255 \times 255 \times 258 = 16{,}776{,}450$ (just shy of 24 bits), and the bit shift of >> 12 then gives us a range of 0-4095.

rgb(0,0,0) = 0
...
rgb(3,3,3) = 0
rgb(4,4,4) = 1 
rgb(5,5,5) = 1
rgb(6,6,6) = 2
rgb(7,7,7) = 3
rgb(8,8,8) = 4  /* 0.097 instead of 0.24 */
...
rgb(17,17,17) = 18  /* 0.439 instead of 0.56 */
...
rgb(22,22,22) = 30  /* 0.73 instead of 0.80 */
...
rgb(25,25,25) #191919 = 39  /* 0.952 instead of 0.972 */
rgb(26,26,26) #1a1a1a = 42  /* 1.025 instead of 1.032 */
rgb(27,27,27) #1b1b1b = 45  /* --> 1.0989 instead of 1.096 <-- */
rgb(28,28,28) #1c1c1c = 49  /* 1.196 instead of 1.161 */
...
rgb(30,30,30) #1e1e1e = 56  /* 1.367 instead of 1.298 */
...
rgb(33,33,33) #212121 = 68  /* 1.66 instead of 1.52 */
...
rgb(119,119,119) #777777 = 891  /* 21.75 instead of 18.4  (∆3.35)  */
...
rgb(128,128,128) #808080 = 1032  /* 25.2 instead of 21.6  (∆3.6)  */
...
rgb(160,160,160) #a0a0a0 = 1612  /* 39.3 instead of 35.1  (∆4.2)  */
...
rgb(200,200,200) #c8c8c8 = 2519  /* 61.5 instead of 57.75  (∆3.75)  */
...
rgb(224,224,224) #e0e0e0 = 3160  /* 77.17 instead of 74.54  (∆2.63)  */
...
rgb(255,255,255) #ffffff = 4095  /* 100 */

In the examples above, we've compared the lumintance with standard luminance calculated from sRGB. We see that colors lower than #1b1b1b calculate darker, and a little lighter above that. The greatest error is near perceptual middle, in the area of #a0a0a0, where colors calculate about 10% lighter. Also, because this is quasi-linear, there are fewer code values per increment in dark colors. This demonstrates the value of gamma-encoded data.
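
For the curious, here is a sketch of how such a comparison can be generated, with the reference luminance from the IEC piecewise transform, shown as a percentage (names are illustrative):

// compare the 12bit lumintance against standard piecewise-sRGB luminance (%)
const linearize = (c) => {
  c /= 255.0;
  return c <= 0.04045 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
};

for (const v of [8, 17, 128, 160, 200, 255]) {
  const lumint = (v * v * 56 + v * v * 183 + v * v * 19) >> 12;  // 0-4095
  const refY = linearize(v) * 100;                               // percent
  console.log(`rgb(${v},${v},${v}) = ${lumint}  /* ${(lumint / 40.95).toFixed(2)} vs ${refY.toFixed(2)} */`);
}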

Why linear then? Linear is how light exists in the real world, and we can improve the realism of renderings and image composites by calculating in a linear space. And typically we'll be doing this in a floating point environment. Nevertheless, we may have static elements (images) that are gamma encoded that need to be linearized. Very often there are many such elements, and the small errors noted are not significant for their purpose, so such an efficient linearization can be employed to linearize things such as textures, bump maps, etc.

Some computer vision methods assume working with linear image/color data—here again, speed and low energy may be more important than absolute accuracy when linearizing.

Whether or not the trade-off in accuracy versus speed is appropriate for your application of course is something that needs to be considered.

Linearized RGB

Taking the ideas from above, we can also create a linearized set of RGB tuples.

//// CONVERT 8bit sRGB to 12bit linearized RGB ////

function rgbLint (sR,sG,sB) {
       // sRGB values are 8bit 0-255
      // Linearized output values are 12bit 0-4095
  let lintR = (sR * sR * 129) >> 11;
  let lintG = (sG * sG * 129) >> 11;
  let lintB = (sB * sB * 129) >> 11;

  return [lintR, lintG, lintB];
}

This is effectively equivalent to lintR = pow(sR, 2) scaled to 12 bits; that is, instead of 2.2 or 2.4, we're linearizing with an exponent of 2.0. And we're not concerned with the coefficients here, as we want (4095,4095,4095) to be white. Still, we're promoting to 12-bit, as linearized RGB needs at least 12 bits per channel to be reasonably useful.
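
A quick sanity check of the function above (values assume the array-returning version):

rgbLint(128, 128, 128);  // [1032, 1032, 1032], matching the 1032 lumintance of #808080 earlier
rgbLint(255, 255, 255);  // [4095, 4095, 4095], 12bit white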

CAVEATS & CENTIPEDES

Caution

Danger Will Robinson! The values shown above are sure to cause anxiety amongst all who find color sacred, including myself. The point of this Gist relates to applications where fidelity to image data or true lightness values is a lower priority than speed of computation.

In particular this may apply to machine environments where remaining as integer math is important, such as in embedded or low power applications (think: motion detection or gain control in remote security cameras as an example).

But another place it can be useful is in real-time user interfaces for color controls, where "accuracy" is not as important as speed, provided there is an accurate model behind it that aligns on control release.

Not Contrast

Important

I also feel I should point out that the gamma or TRC used in most image encodings is not a useful way to find accurate contrast. While the high gamma values used in image processing may give pleasing images, when it comes to predicting contrast, and especially the contrast of text and thin lines, the related lightness curves are essentially flatter when predicting contrast as a difference between encoded values.


Copyright © 2024 by Myndex. All Rights Reserved. Thank you for reading.
