Just wondering: do image processors like Capture One, Lightroom, etc. use floating point for their internal maths?

I got to asking this when I was having trouble making a nice print of this portrait. Pushing contrast and graded filters to the end stops seems to make the image quite sensitive to digital rounding errors. Printing it using the standard (8-bit) printer driver resulted in awful banding in the greys. Luckily my Canon printer comes with an “XPS” driver that supports 16-bit images, and this made a nice clean print from a 16-bit TIFF.
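To put a rough number on why the bit depth mattered, here's a little numpy sketch of a slow grey gradient like the backdrop in the portrait. The 40%–55% tonal span is just a guess on my part, not measured from the file, and the pixel count is arbitrary:

```python
import numpy as np

# A slow grey gradient like a studio backdrop: a narrow tonal range
# (the 40%-55% span is only a guess) spread across the width of a print.
backdrop = np.linspace(0.40, 0.55, 4000)

as8  = np.round(backdrop * 255).astype(np.uint8)      # what a standard 8-bit driver gets
as16 = np.round(backdrop * 65535).astype(np.uint16)   # what a 16-bit path gets

print("distinct grey levels at 8 bits :", len(np.unique(as8)))    # ~39 -> wide, visible bands
print("distinct grey levels at 16 bits:", len(np.unique(as16)))   # ~4000 -> effectively smooth
print("pixels per band at 8 bits      :", len(backdrop) // len(np.unique(as8)))
```

With only ~39 codes covering that span, each band is roughly a hundred pixels wide, which would certainly show on a big print; the 16-bit version has far more levels than the eye (or the printer) can separate.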

Would such quantisation errors maybe cause problems during processing, too? Would this explain a few weird things we see from time to time? (Like a green ring around the sun that I could NEVER get rid of? Threw that one away in disgust…). Would floating-point arithmetic help?
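Here's the sort of thing I imagine going wrong inside a pipeline that rounds to 8 bits between steps. It's a made-up toy edit ("darken two stops, then brighten two stops"), nothing like what Lightroom or Capture One actually do internally:

```python
import numpy as np

grad = np.linspace(0.0, 1.0, 2001)   # a smooth tonal ramp

def store8(x):
    # Round to the nearest 8-bit code, the way an 8-bit working space would between edits.
    return np.round(np.clip(x, 0.0, 1.0) * 255) / 255

# A made-up edit that should cancel out exactly: pull exposure down two stops,
# then push it back up two stops.
via_8bit  = store8(store8(grad * 0.25) * 4.0)   # rounded to 8 bits between the two steps
via_float = (grad * 0.25) * 4.0                 # kept in floating point throughout

print("distinct levels, 8-bit intermediate:", len(np.unique(np.round(via_8bit  * 255))))  # ~65
print("distinct levels, float intermediate:", len(np.unique(np.round(via_float * 255))))  # 256
```

Three quarters of the grey levels never come back in the 8-bit version, and I suspect that kind of loss, happening at different rates in different channels, is where artefacts like that green ring could come from. A floating-point (or at least high-bit-depth) working space sidesteps it.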


(Bert De Colvenaer – Executive Director of ECSEL Joint Undertaking. Image (c) me).