| Printable Version of Topic |
| Unofficial VirtualDub Support Forums > VirtualDub Development Forum > Might There Be Some Parts That Can Be Incorporated |
| Posted by: Sarreq Teryx Nov 13 2002, 06:59 PM |
| http://filmgimp.sourceforge.net/ http://slashdot.org/article.pl?sid=02/11/13/1612259&mode=thread&tid=97 I don't have Linux, and that's the only release that seems to be available at the moment, so I can't test it, but there might be something useful in it that could be incorporated into VD in the future. buzz lines: - developed by film industry "professionals"TM - has been used in Harry Potter, Cats & Dogs, Dr. Dolittle 2, Little Nicky, Grinch, Sixth Day, Stuart Little, and Planet of the Apes - its application to feature movie productions includes the movies Scooby-Doo, Harry Potter, and Stuart Little Really gives contrast to the MPAA trying to squash open source, doesn't it. A very interesting feature is its ability to function in 16-bit PER COLOR mode (think of the higher quality that filters could function at). I know most of its functions would have absolutely nothing to do with VD, it being a multi-frame retouching tool (though maybe some could become filters), but there may be parts (16 bits per color channel for filters) that would be quite useful in VD. |
| Posted by: Morsa Nov 13 2002, 08:09 PM |
| Useful for what? It is nice to use 16 bits per color, but you missed something... None of us has a 16-bits-per-color-channel monitor, or do we? |
| Posted by: fccHandler Nov 14 2002, 05:26 AM |
| I suspect that incorporating support for 16 bits per channel into VirtualDub (at this stage) would be a coding nightmare. Something to think about for VirtualDub 2.0 maybe. Will it really make that much difference? Thanx for the linx anyway; interesting reading. |
| Posted by: Morsa Nov 14 2002, 06:02 AM |
| I don't know what others think, but what's the purpose of having 16-bit support when you don't have a video card and a monitor that support it? In the film industry it makes sense because they use special equipment; 16 bits avoids color banding, and film recorders and color correction software work in that colorspace. Also remember that none of the digital video formats available today have such a bit depth. Even HDTV (or CineAlta, Episode II, remember?) has 8 bits per color channel and a color sampling of 22:11:11. The only format that has greater bit depth is DigiBeta, which has 10 bits per channel and a sampling we all know: YUY2, AKA 4:2:2. (Also remember DigiBeta will soon be replaced by HDTV. There are rumours that Sony is not going to produce any new models for this format. If you take note that a DigiBeta studio video recorder and an HDTV VTR cost exactly the same, and good cameras are valued at more or less 70,000 vs 100,000, it makes sense, because you always have the option to downsample HDTV to standard resolution with better final image quality.) So, the case is that most of the time we're working with sources with 8-bit color depth and a crappy color sampling that has half the resolution of luma. The only interesting use would be inside a filter, to avoid some rounding errors, with dithering when it outputs to 8 bits. Please, if someone has information about this topic, tell me: What PC video card supports 16 bits per color channel? What PC monitor supports this bit depth? The only instance in which I sometimes use 16-bit color depth is when I use Discreet's Combustion to apply color correction to a video. In this case I use 16 bits because it avoids clipping and rounding errors, and when I convert back to 8 bits Combustion applies a kind of dithering that more or less eliminates color banding. |
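The 16-to-8-bit conversion with dithering that Morsa describes can be sketched in a few lines. This is a hypothetical illustration (plain random dither, not what Combustion actually does): truncation produces hard steps on a smooth gradient, while adding sub-LSB noise before quantizing spreads the rounding error out so it averages away instead of banding.

```python
import random

def to8_truncate(x16):
    """Drop the low 8 bits: fast, but causes visible banding on gradients."""
    return x16 >> 8

def to8_dithered(x16, rng):
    """Add up to one 8-bit LSB of noise before rounding (random dither)."""
    v = x16 / 256.0 + rng.random() - 0.5
    return min(255, max(0, round(v)))

rng = random.Random(0)
# A smooth 16-bit ramp, like a gradient in a 16-bit-per-channel image.
ramp16 = [round(i * 65535 / 4095) for i in range(4096)]
plain = [to8_truncate(v) for v in ramp16]
dith = [to8_dithered(v, rng) for v in ramp16]
# The truncated ramp moves in hard 8-bit steps; the dithered one hits the
# same average level but breaks up the step edges into noise.
```

The dithered version trades a small amount of noise for the elimination of contouring, which is much less visible to the eye.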
| Posted by: Spire Nov 14 2002, 06:43 AM |
| I'm not aware of any consumer-level video cards that support 16 bits per color channel, but the Matrox Parhelia (http://www.matrox.com/mga/products/parhelia/128mb.cfm) supports 10 bits per channel. As for monitors, any analog monitor (from the original 1987 8514 VGA monitor to the latest 24" Trinitron monster) supports infinite color depth. |
| Posted by: Morsa Nov 14 2002, 07:15 AM |
| Are you sure??? So why do some models from Mitsubishi say "number of colors: infinite" and other models from the same manufacturer say "number of colors: 16,000,000"? Thank you, Spire, for the information about the Parhelia. It's really nice and I think I'm gonna buy it. It costs 400 bucks, more or less the same as the best GeForce4. Do you know which PC video cards support 16 bits per channel? |
| Posted by: Sarreq Teryx Nov 14 2002, 07:56 AM |
| It's not about what can be displayed by the hardware; it's about the precision a filter can obtain if it's able to function in a 16*16*16 colorspace instead of just 8*8*8 or even 10*10*10. Even if you can't display that depth, it improves what you actually can see. It's kind of like working at 2048x1536 on an image that will end up being 1024x768: working with twice as many pixels in each direction, then resampling down when finished, gives you better results than just working at the final size of the file. Or the way 2x AA in video games looks much better than no supersampling at all. And anyway, the Radeon 9700 (and any properly DX9-compatible video card) has a hardware 128-bit floating-point colorspace, so if you need to be able to display the full 48-bit color range, there you go. |
| Posted by: Morsa Nov 14 2002, 08:21 AM |
| Tell me more about this; does it work that way? So you are saying that the Radeon really supports 16 bits per color channel and shows 48-bit images? Or did I misunderstand you? |
| Posted by: Sarreq Teryx Nov 14 2002, 10:21 AM |
| No, it supports 32-bit floating point per color channel, and can show up to 128-bit FP images, but since 48 bits quite easily fits into a 128-bit FP colorspace, it'll show those also. This applies to the Radeon 9700, 9700 Pro, and AIW 9700, and, if they want to be fully DirectX 9 compliant, any new video card coming out this upcoming year, including the NV30. It doesn't apply to the Parhelia, since that only supports up to 10 bits per color, or to the Radeon 9000, but that's only meant to be DX8.1 compliant anyway. |
| Posted by: Morsa Nov 14 2002, 09:05 PM |
| So if I install a Radeon 9700, I should have a 48- or 64-bit color option under display options in Windows. Am I right? (Just to be sure.) |
| Posted by: phaeron Nov 15 2002, 04:07 AM |
| Quantization error from N-bit depth components is not the same as aliasing from point-sampling the screen in 3D rendering. 2xAA is roughly analogous to adding only one bit of sampling depth, which is not significant at all. My subtitler filter uses 64xAA (8xH/8xV) and that still only produces 6-bit output. The major reason, IIRC, to use 16-bit components is that you need the extra precision on the low end to be able to produce full 8-bit gamma-corrected output, but that is of little concern if your input device has already gamma-corrected its input in 8-bit space and discarded precision on the low end anyway. Remember, too, that just because data between filters is passed as ARGB32 doesn't mean that all pixel arithmetic must be done internally at that resolution. The resize filter's separable polyphase filters use 18:14 fixed-point arithmetic, and most of the other simple FIR-based ones are 8:8. There is another annoyance with 16-bit components: differences become 17-bit. 8-bit components are easy to handle in MMX because you just unpack to 16-bit and use that word size to handle the 9-bit differences, but 17-bit and higher multiplication is a real hassle in MMX. |
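The fixed-point idea phaeron mentions can be illustrated with a small sketch (hypothetical Python, not VirtualDub's actual code; the function names are made up). An 8:8 format means 8 fractional bits: coefficients are scaled by 256 to integers, multiplied against 8-bit samples in pure integer arithmetic, and the accumulator is rounded and shifted back down, mimicking what MMX-style integer pipelines do.

```python
# Hypothetical sketch of 8:8 fixed-point FIR filtering on 8-bit samples.
# All arithmetic stays in integers, as it would in an MMX register.

FRAC_BITS = 8
SCALE = 1 << FRAC_BITS

def fir_fixed(samples, taps):
    """Apply a FIR filter using 8:8 fixed-point coefficients."""
    itaps = [round(t * SCALE) for t in taps]   # quantize coefficients
    half = len(taps) // 2
    out = []
    for i in range(len(samples)):
        acc = SCALE // 2                       # rounding offset (half an LSB)
        for j, c in enumerate(itaps):
            # Clamp indexing at the edges of the scanline.
            k = min(max(i + j - half, 0), len(samples) - 1)
            acc += c * samples[k]
        out.append(min(255, max(0, acc >> FRAC_BITS)))
    return out

# A simple 3-tap blur: [0.25, 0.5, 0.25] applied to an impulse.
result = fir_fixed([0, 0, 255, 0, 0], [0.25, 0.5, 0.25])
# -> [0, 64, 128, 64, 0]
```

Since each product of a signed 8:8 coefficient and an 8-bit sample fits comfortably in 16 bits, this maps directly onto packed-word multiplies; with 16-bit samples the products would need 32-bit room, which is exactly the MMX hassle phaeron describes.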
| Posted by: fccHandler Nov 15 2002, 05:03 AM |
| @phaeron: Whoosh, that went right over my head! I guess that's why you're the master. |
| Posted by: Sarreq Teryx Nov 15 2002, 08:53 AM |
| I would guess so; I don't have one to know for sure. |
| Posted by: Morsa Nov 16 2002, 01:44 AM |
| Could someone answer this question in more detail? And also whether having 10 bits instead of 8 makes a big difference. |
| Posted by: Sarreq Teryx Nov 16 2002, 08:38 AM |
| OK, everyone else here with a scanner, tell me this: what looks better, scanning at normal 24-bit RGB color (the normal COLOR mode on Canon scanners) or using its full color space, whether that be 30-bit RGB, 32-bit RGB (10+11+11, in whatever order it does it), 48-bit RGB, whatever (HIGH DEFINITION COLOR on Canon scanners)? I know that with my scanner at 48-bit color it looks a hell of a lot better than in normal 24-bit color mode, even when the program receiving the output can only handle 24-bit RGB / 32-bit RGBA color. My point is, the higher the bit depth of the color while processing, the better the output, whatever the output bit depth may be. That's why most (slightly older) professional TV and movie production programs work with 10 bits per channel, and newer ones in 16 bits per color, while processing, then downconvert to whatever their output will be (unless it's getting printed to film). They might not be distributing 48-bit color, but if they were to process the video in only 24-bit color, it would NOT look as good. If VD were able to pass 48-bit color, I think it would improve the quality the filters can output, whatever bit depth the final codec works in. |
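The benefit of a wider intermediate can be demonstrated with a toy round trip (a hypothetical example, not taken from any real filter): darken every pixel by 4x and then brighten by 4x again. With an 8-bit intermediate the low two bits are destroyed; with a 16-bit intermediate the round trip is exact.

```python
# Toy demonstration: darken by 4x then brighten by 4x, comparing an
# 8-bit intermediate against a 16-bit one.

def roundtrip_8bit(v):
    dark = v // 4                 # result quantized back to 8 bits
    return min(255, dark * 4)     # the low two bits of v are gone

def roundtrip_16bit(v):
    dark16 = (v * 257) // 4       # promote 0..255 to 0..65535 first
    bright16 = dark16 * 4         # precision survives in the intermediate
    return (bright16 + 128) // 257  # only now round back down to 8 bits

values = range(256)
err8 = [abs(roundtrip_8bit(v) - v) for v in values]
err16 = [abs(roundtrip_16bit(v) - v) for v in values]
# err8 reaches 3 levels of error; err16 is zero everywhere.
```

Real filter chains are subtler than this (see phaeron's error analysis below in the thread), but the mechanism is the same: precision thrown away at an 8-bit intermediate step cannot be recovered later.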
| Posted by: phaeron Nov 16 2002, 09:35 PM |
| That is not comparable -- the scanner has to deal with gamma correction and calibration for variance in the optical pickups. Also, a 48-bit RGB setting in the scanner software doesn't necessarily mean you have 16 bits of channel resolution per pixel, since interpolation and filtering are almost certainly involved. Approach this from an error-analysis standpoint: assuming sufficient internal precision in the filters, each video filter in the chain is going to add one-half ulp (unit in the last place) of uniformly distributed error. For filters with unity gain -- resize, blur, etc. -- these errors will accumulate to an approximately normal distribution whose variance is the sum of the variances from each filter. That means after applying ten filters you have an average standard deviation of 1.58, which is an error of 1.3 bits. Not significant if your analog error is already ~3 bits, and especially insignificant if one or more of the filters is a noise reduction filter. You can always claim that video will look better with higher internal precision, but such claims are useless without proper analysis of the quality of your sources and the average error produced by your filtering operations. IEEE 32-bit floating point will give you even better precision, but I guarantee that no one will be willing to put up with the drop in performance. If you were to investigate the internal pathways of hardware video chips you would likely be shocked by the narrow widths used, but the reason the chips work is that the engineering staff does thorough error analysis and uses no more bits than are necessary to produce high-quality output. |
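The accumulation behavior phaeron describes can be checked with a quick Monte Carlo simulation. This is a hedged sketch under an assumed error model (each unity-gain filter adds an independent, uniformly distributed rounding error; the exact figures depend on the distribution chosen): the key property is that the total error's standard deviation grows with the square root of the filter count, not linearly.

```python
import random

# Monte Carlo check of rounding-error accumulation across a filter chain.
# Model (an assumption, not measured from VirtualDub): each filter adds
# an independent error drawn uniformly from [-0.5, 0.5] ulp. By the
# central limit theorem the total is approximately normal, with variance
# proportional to the number of filters.

def total_error_stddev(n_filters, trials, rng):
    """Estimate the standard deviation of the summed per-filter errors."""
    errs = []
    for _ in range(trials):
        e = sum(rng.uniform(-0.5, 0.5) for _ in range(n_filters))
        errs.append(e)
    mean = sum(errs) / trials
    var = sum((e - mean) ** 2 for e in errs) / trials
    return var ** 0.5

rng = random.Random(42)
sd10 = total_error_stddev(10, 20000, rng)   # error after 10 filters
sd40 = total_error_stddev(40, 20000, rng)   # error after 40 filters
# Quadrupling the chain length only roughly doubles the deviation.
```

This square-root growth is why a long filter chain degrades output far more slowly than a worst-case "one ulp per filter" intuition would suggest.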