Depends on the codec. x264 won't catch them well enough. There is a sizable difference between dedupped and non-dedupped sizes if the run has lots of duplicate frames.
What do you mean by caught? If you mean that the encoder detects a duplicate frame but still encodes it with some small amount of data, I guess you are correct. But if you meant that the duplicate frame is dropped, then you are incorrect, since it is not. There are other benefits to removing duplicate frames. For example, H.264 allows at most 16 reference frames; if the duplicates are left in, that window covers fewer distinct frames to predict from. Also, while the encoded duplicate frames are quite small, they still take up space. Basically, in general, the dedup encode is smaller because we remove the duplicates.
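To get a feel for how many duplicates a particular dump actually contains before bothering with a dedup pass, something along these lines can be run against a raw YV12 stream. This is purely an illustrative sketch, not an existing tool, and treating "duplicate" as byte-identical to the previous frame is my assumption:

# count_dups.py - count frames that are byte-identical to their predecessor
# in a raw YV12 stream (YV12 is 12 bits per pixel, so width*height*3/2 bytes)
import sys

def count_duplicates(path, width, height):
    frame_size = width * height * 3 // 2
    total = dups = 0
    prev = None
    with open(path, "rb") as f:
        while True:
            frame = f.read(frame_size)
            if len(frame) < frame_size:
                break
            total += 1
            if frame == prev:
                dups += 1
            prev = frame
    return total, dups

if __name__ == "__main__":
    total, dups = count_duplicates(sys.argv[1], int(sys.argv[2]), int(sys.argv[3]))
    print(f"{dups} of {total} frames duplicate their predecessor")

e.g. python count_dups.py cc10raw.yv12 256 256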
Joined: 10/28/2007
Posts: 1360
Location: The dark horror in the back of your mind
Not true, in x264's case. Allow me to illustrate.
(Most of the files I refer to in the following are available here, for the curious.)
I made this image while [thread 11511]testing media player colour spaces[/thread], and converted it into a ten second, 60fps test clip in FFV1/BGR32 (cc10.avi).
From here, I applied two workflows.
First, I used ffmpeg to directly convert the AVI into a raw YV12 input to be used with x264:
ffmpeg -i cc10.avi -pix_fmt yuvj420p -vcodec rawvideo -f rawvideo cc10raw.yv12
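(As a quick sanity check, assuming all 600 frames of the ten second, 60fps clip come through, the raw file should be 256 * 256 * 1.5 * 600 = 58,982,400 bytes, since YV12 stores 12 bits per pixel.)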
I then encoded this losslessly with x264:
x264 --fps 60 --input-res 256x256 --demuxer raw cc10raw.yv12 --crf 0 --fullrange on -o cc10nodedup.mp4
For the second, I used my [wiki DedupC]dedup filter[/wiki] in conjunction with ffmpeg to generate a dedupped raw YV12 (cc10dedup.yv12) plus a matching timecode file (times.txt), then fed those to x264:
x264 --input-res 256x256 --demuxer raw cc10dedup.yv12 --crf 0 --fullrange on --tcfile-in times.txt -o cc10dedup.mp4
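For anyone wondering what the dedup step does conceptually, here is a rough sketch. This is not the DedupC filter itself, just an illustration under two assumptions of mine: a frame is dropped only when it is byte-identical to the previous kept frame, and the surviving frames keep their original display times via a "timecode format v2" file suitable for --tcfile-in. The file names and resolution are the ones used in this post.

# dedup_raw.py - illustrative dedup pass over raw YV12 (not the DedupC filter)
# Writes only frames that differ from the previous kept frame, plus a
# "timecode format v2" file so the encoder preserves the original timing.
def dedup(in_path, out_path, tc_path, width, height, fps):
    frame_size = width * height * 3 // 2  # YV12 is 12 bits per pixel
    prev = None
    with open(in_path, "rb") as fin, \
         open(out_path, "wb") as fout, \
         open(tc_path, "w") as ftc:
        ftc.write("# timecode format v2\n")
        i = 0
        while True:
            frame = fin.read(frame_size)
            if len(frame) < frame_size:
                break
            if frame != prev:
                fout.write(frame)
                # a kept frame is shown at its original time; dropped
                # duplicates simply stretch its duration
                ftc.write(f"{i * 1000.0 / fps:.6f}\n")
                prev = frame
            i += 1

if __name__ == "__main__":
    dedup("cc10raw.yv12", "cc10dedup.yv12", "times.txt", 256, 256, 60)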
The resulting cc10dedup.mp4 is 26047 bytes, and the resulting cc10nodedup.mp4 is 87956 bytes, roughly a 70% reduction. There's a clear benefit.
EDIT: In the interest of full disclosure,