How does SLS convert a 24- or 32-bit color image to 16-bit?

When converting an RGB888 image to RGB565, there are two common methods.
The simpler one just shifts the unused low bits of each color channel away. For example, red only gets 5 bits in RGB565, so its 3 least significant bits are shifted off into oblivion:

Red5 = Red8 >> 3
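
For illustration, a minimal C sketch of that truncation approach (the function name is mine, not anything from SLS):

```c
#include <stdint.h>

/* Truncation: drop the low bits of each 8-bit channel and pack
   into RGB565 (5 bits red, 6 bits green, 5 bits blue). */
static inline uint16_t rgb888_to_rgb565_shift(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```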

The more accurate method is to rescale the full 0–255 range onto the smaller range, with rounding:

Red5 = round((Red8 / 255) × 31)   (5 bits gives 32 levels, 0–31)
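
The same idea in C using integer rounding, again just a sketch of the technique rather than what SLS actually does (note green gets 6 bits in RGB565, so it scales to 0–63):

```c
#include <stdint.h>

/* Rescale with rounding: maps 0..255 onto 0..31 (red/blue) or
   0..63 (green), spreading quantization error evenly. Adding
   127 before dividing by 255 implements round-to-nearest. */
static inline uint16_t rgb888_to_rgb565_scale(uint8_t r, uint8_t g, uint8_t b)
{
    uint16_t r5 = (uint16_t)((r * 31 + 127) / 255);
    uint16_t g6 = (uint16_t)((g * 63 + 127) / 255);
    uint16_t b5 = (uint16_t)((b * 31 + 127) / 255);
    return (uint16_t)((r5 << 11) | (g6 << 5) | b5);
}
```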

I’m not sure the difference is really visible in practice, but I’m curious which method SLS uses.