For grayscale, the human eye can distinguish only about 100 shades, while a typical monitor can display 256 shades (8 bits).
Twenty-four-bit color is generated on a computer monitor by having three colored subpixels (red, green, and blue), each displaying up to 256 levels of brightness (8 bits). Three colors x 8 bits per color = 24-bit color.
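The arithmetic above can be checked directly (a trivial sketch, nothing beyond the numbers already stated):

```python
# 24-bit color: three 8-bit channels (red, green, blue).
shades_per_channel = 2 ** 8              # 256 brightness levels per channel
bits_total = 3 * 8                       # 24-bit color
colors_total = shades_per_channel ** 3   # total displayable colors

print(shades_per_channel)  # 256
print(bits_total)          # 24
print(colors_total)        # 16777216 (~16.7 million colors)
```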
A typical digital camera uses 12- to 14-bit sensors, one channel for each color. This yields 36- to 42-bit color images. Why is this helpful? Even though the eye cannot see all of those colors, a computer can still process them. By saving the additional information (i.e., saving the image in RAW format instead of a compressed format like JPEG), software can rescale and process the images with much lower levels of artifacts. For example, stretching the contrast of an 8-bit image can introduce bands in the colors, an effect called posterization.
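The posterization effect can be sketched numerically. The band boundaries (1000 to 1200) and the simple 12-to-8-bit quantization below are illustrative assumptions, not values from any particular camera or codec:

```python
# A smooth 12-bit ramp (0..4095), as a RAW sensor might record it.
raw = list(range(0, 4096))

def stretch(v, lo, hi):
    """Map the band [lo, hi] onto the full 8-bit output range."""
    return int((v - lo) * 255 / (hi - lo))

# Path A: stretch the contrast of a narrow band in the 12-bit data directly.
from_raw = {stretch(v, 1000, 1200) for v in raw if 1000 <= v <= 1200}

# Path B: quantize to 8 bit first (as JPEG storage does), then stretch
# the same band (1000 // 16 = 62 through 1200 // 16 = 75).
from_jpeg = {stretch(v // 16, 62, 75) for v in raw if 1000 <= v <= 1200}

print(len(from_raw))   # 201 distinct output levels -> smooth tones
print(len(from_jpeg))  # 14 distinct output levels -> visible banding
```

The RAW path keeps hundreds of distinct tones; the pre-quantized path collapses the same band to a handful of levels, which is exactly the banding described above.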
This brings us to High Dynamic Range (HDR) photography. In this computational photographic method, multiple images of the same scene are combined to maximize the visibility of all areas of the photo. This can result in some very striking images, but often only after a significant amount of manual tweaking to get just the right effect. Herein lies the value.
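As a sketch of how such a merge can work, here is a minimal exposure-fusion pass. The pixel values, the Gaussian "well-exposedness" weighting, and its center and width are illustrative assumptions; real HDR tools add alignment, tone mapping, and multi-scale blending on top of this idea:

```python
import math

def well_exposed_weight(p, center=128.0, width=50.0):
    """Favor mid-tone pixels; crushed shadows and clipped highlights get low weight."""
    return math.exp(-((p - center) ** 2) / (2 * width ** 2))

def fuse(exposures):
    """Per-pixel weighted average across an aligned stack of exposures."""
    fused = []
    for pixels in zip(*exposures):
        weights = [well_exposed_weight(p) for p in pixels]
        total = sum(weights)
        fused.append(round(sum(w * p for w, p in zip(weights, pixels)) / total))
    return fused

dark   = [5, 20, 60, 120]     # underexposed frame: shadows crushed
mid    = [40, 90, 160, 220]   # normal exposure
bright = [90, 180, 250, 255]  # overexposed frame: highlights clipped
print(fuse([dark, mid, bright]))
```

Each fused pixel leans toward whichever exposure captured that region best, which is why the result can show detail in both shadows and highlights at once.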
"Existing tools are therefore likely to improve significantly; there is not currently, and may never be, an automated single-step process which converts all HDR images into those which look pleasing on screen, or in a print. Good HDR conversions therefore require significant work and experimentation in order to achieve realistic and pleasing final images."
There is no simple answer to getting the "best" HDR rendering. Therefore hobbyists will pay for tools, new tools, and more new tools to conquer the uncertainty: that there might be a better picture somewhere in the data.
How do you get a 60% gross margin selling hardware with substantial COGS? That margin comes largely from the defect inspection groups rather than the metrology groups. Why?
Value, in this case, comes from the lack of a definitive baseline. In metrology the standard is defined: a micron is a micron. Differentiation is hard because there is a clear definition of success. In inspection, there is always the fear that there might be a better result somewhere in the data... you don't know if it's good enough.
What does HDR have to do with this?
Humans can't see the difference between 10,000 gray levels, but machines can. Setting up the algorithms to properly deal with this difference is hard. For example, one aspect of tuning comes from the bit depth of the sensor: the number of possible parameter combinations grows from 8 bit (multiply combinations by 256 per parameter) to, say, 14 bit (multiply by 16,384 per parameter). Increasing the bit depth increases the amount of information, and with it the possibility that there may be good results in the data, but it also makes comparisons even more difficult, i.e., higher risk. Proving that you are the best at reducing this risk is hard. Customers pay a premium for what is hard to do.
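The combinatorial growth mentioned above can be made concrete. The assumption of a recipe with three independent threshold parameters, each settable to any sensor gray level, is hypothetical, but it shows the scaling:

```python
def parameter_combinations(num_params, bit_depth):
    """Each parameter can take any of 2**bit_depth gray levels."""
    return (2 ** bit_depth) ** num_params

# Tuning space for a hypothetical three-threshold inspection recipe.
for bits in (8, 12, 14):
    print(f"{bits}-bit sensor: {parameter_combinations(3, bits):,} combinations")
# 8-bit  sensor: 16,777,216 combinations
# 12-bit sensor: 68,719,476,736 combinations
# 14-bit sensor: 4,398,046,511,104 combinations
```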
Why are medical imaging devices constructed to such demanding specifications, particularly native bit-depth (12-bit grayscale) processing and display, when the human eye can't see that many shades?
My guess (I have not seen a definitive answer): liability. If a diagnosis were missed or rendered incorrectly because of a conversion artifact or loss of data, that would be bad.
The risk caused by subjective interpretation of images, particularly if those images have been modified in some way by the system, drives mitigation through rigorous technical specs (extended bit depth, color temperature control, white level balance, color matching, multiple-monitor matching). Those specs increase material, installation, and maintenance costs directly. They also reduce the number of vendors who can meet them, further driving up price.
Are those specs and the associated costs really necessary? Fear (and/or lawyers) says "yes," so medical-grade display systems sell at a significant premium versus commercial displays.