Unofficial VirtualDub Support Forums > VirtualDub Filters and Filter Development > Intel Compiler Error
Posted by: jpsdr Oct 15 2011, 10:49 AM

I have the following problem with the latest Intel compiler:

>int128.cpp(40): error : no suitable user-defined conversion from "vduint128" to "const vdint128" exists
1>     return (q[1]^x.q[1])<0 ? -(const vdint128&)bd : (const vdint128&)bd;
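For context, the pattern behind the failing line can be sketched as below. The struct definitions are hypothetical minimal stand-ins for VirtualDub's real vdint128/vduint128 (which carry full operator sets in int128.h), and apply_sign is an illustrative name, not the actual function:

```cpp
#include <cstdint>

// Hypothetical minimal stand-ins for VirtualDub's 128-bit types; only the
// q[] storage matches the real definitions in int128.h.
struct vdint128  { int64_t  q[2]; };   // q[1] is the signed high half
struct vduint128 { uint64_t q[2]; };

// Sketch of the idiom: an unsigned intermediate 'bd' is reinterpreted as
// signed, then the sign of the result is applied from the operands' high bits.
vdint128 apply_sign(const vdint128 &x, const vdint128 &y, const vduint128 &bd) {
    // This C-style cast is what the Intel compiler rejects with "no suitable
    // user-defined conversion"; other compilers treat it as a reinterpret_cast.
    const vdint128 &sbd = (const vdint128 &)bd;
    bool negative = (x.q[1] ^ y.q[1]) < 0;
    // 128-bit two's-complement negation: negate the low word and carry the
    // borrow into the high word when the low word is zero.
    return negative ? vdint128{ { -sbd.q[0], ~sbd.q[1] + (sbd.q[0] == 0) } }
                    : sbd;
}
```

The two structs are layout-compatible, so the reinterpretation works in practice even though the standard is stricter about aliasing distinct class types.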
Posted by: jpsdr Oct 17 2011, 07:32 AM

So, of course, the question is: how do I correct it? The strange thing is that, until this latest version, it compiled without problems.
Posted by: IanB Oct 17 2011, 09:10 PM

The compiler is right but is being overly pedantic. You can probably deceive it with some & and * magic:

(const vdint128&)(*((vdint128*)&bd))
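IanB's workaround can be sketched as a small helper. As before, the structs are hypothetical stand-ins for the real types, and as_signed is an illustrative name; the point is that routing through a pointer cast forces pure reinterpretation instead of a search for a user-defined conversion:

```cpp
#include <cstdint>

// Hypothetical minimal stand-ins for the real VirtualDub 128-bit types.
struct vdint128  { int64_t  q[2]; };
struct vduint128 { uint64_t q[2]; };

// The suggested "& and * magic": take the address, cast the pointer, and
// dereference. The pointer cast is always a plain reinterpretation, so no
// user-defined conversion is ever considered.
const vdint128 &as_signed(const vduint128 &bd) {
    return (const vdint128 &)(*((vdint128 *)&bd));
}
```

The modern spelling of the same thing would be reinterpret_cast<const vdint128&>(bd); both forms rely on the two structs having identical layout.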
Posted by: jpsdr Oct 19 2011, 07:33 AM

Thanks, it worked.
Posted by: phaeron Oct 23 2011, 07:31 PM

I don't understand why the Intel compiler is trying to issue a value cast here. C-style casts are allowed to fall back to a reinterpret_cast, and a diagnostic can only be issued if the cast is impossible or ambiguous. There are no conversions between vdint128 and vduint128 that would create an ambiguous cast situation.
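phaeron's reading can be checked with a minimal pair of unrelated types (A, B, and view_as_A are illustrative names, not from the thread): since neither static_cast nor const_cast can perform the conversion, the C-style cast sequence degrades to a reinterpret_cast, so a conforming compiler has to accept it:

```cpp
struct A { int v; };
struct B { int v; };   // unrelated to A: no base class, no conversion functions

// With no user-defined conversion in sight and no inheritance relationship,
// the C-style cast below is interpreted as a reinterpret_cast of the
// reference and is well-formed C++.
const A &view_as_A(const B &b) {
    return (const A &)b;
}
```

This mirrors the vdint128/vduint128 situation exactly: the cast is neither impossible nor ambiguous, so the Intel compiler's error looks like a conformance regression rather than a stricter-but-correct diagnostic.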