
Virtualdub Developer
  
Group: Administrator
Posts: 7773
Member No.: 61
Joined: 30-July 02

That is not comparable -- the scanner has to deal with gamma correction and calibration for variance in the optical pickups. Also, a 48-bit RGB setting in the scanner software doesn't necessarily mean you get 16 bits of resolution per channel, since interpolation and filtering are almost certainly involved.
Approach this from an error analysis standpoint: assuming sufficient internal precision in the filters, each video filter in the chain is going to add about one-half ulp (units in the last place) of uniformly distributed error. For filters with unity gain -- resize, blur, etc. -- these errors accumulate to an approximately normal distribution whose variance is the sum of the individual filters' variances. Taking 0.5 ulp as the per-filter standard deviation, applying ten filters gives a combined standard deviation of sqrt(10) x 0.5, or about 1.58 ulp -- an error of roughly 1.3 bits. That's not significant if your analog error is already ~3 bits, and it's especially insignificant if one or more of the filters is a noise reduction filter.
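The variance-addition argument is easy to check numerically. Here is a minimal Monte Carlo sketch (my own illustration, not from the post) that takes the post's figure of 0.5 ulp as the per-filter error standard deviation and measures how the errors from a ten-filter chain accumulate:

```python
import math
import random

random.seed(0)

N_FILTERS = 10   # filters in the chain
SIGMA = 0.5      # assumed per-filter error std dev, in ulps (the post's figure)
TRIALS = 200_000

# Each filter contributes an independent error term; for independent errors,
# the variances add, so the chain's std dev should be sqrt(N) * SIGMA.
totals = [
    sum(random.gauss(0.0, SIGMA) for _ in range(N_FILTERS))
    for _ in range(TRIALS)
]

mean = sum(totals) / TRIALS
std = math.sqrt(sum((t - mean) ** 2 for t in totals) / TRIALS)

print(f"measured std dev:  {std:.2f} ulp")
print(f"predicted std dev: {math.sqrt(N_FILTERS) * SIGMA:.2f} ulp")  # sqrt(10)*0.5 ~ 1.58
```

The measured value converges on sqrt(10) × 0.5 ≈ 1.58 ulp, matching the figure above; doubling the filter count only grows the error by sqrt(2), which is why long filter chains degrade precision much more slowly than a worst-case (linear) analysis would suggest.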
You can always claim that video will look better with higher internal precision, but such claims are useless without proper analysis of the quality of your sources and the average error produced by your filtering operations. IEEE 32-bit floating point will give you even better precision but I guarantee that no one will be willing to put up with the drop in performance. If you were to investigate the internal pathways of hardware video chips you would likely be shocked by the narrow widths used, but the reason the chips work is that the engineering staff does thorough error analysis and uses no more bits than are necessary to produce a high-quality output.