YV12 uses chroma subsampling. A PointResize( *2, *2) before the color resampling helps preserve the information contained in the RGB original.
CRTs generally don't have square pixels. The NES renders a 256x224 color field which gets spread across a 4:3 screen in the analog domain. Stretching to 298x224 is a lossy process, but pointresize( *14, *12) isn't.
YouTube doesn't let you choose your upsizer when viewing fullscreen. Knowing that a truly point-resized option is available at all resolutions is comforting to some. Others like to show off how shiny hqx/nedi makes things, while the rest of us gag.
YouTube doesn't always honor your resolution when it doesn't fall exactly on 240, and letting YouTube decide how to handle resizing is pretty scary.
Pointresize( *14, *12) seems crazy, but with pristine sources and lossless encoding it compacts incredibly well. Lossy encodes will oddly be much bigger in most cases.
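The arithmetic behind those two claims can be checked with a quick sketch (plain Python on toy one-scanline data; zlib stands in for a real lossless video codec, so the exact ratios are illustrative only):

```python
import random
import zlib

# A 14x/12x point resize of the 256x224 NES field lands exactly on 4:3.
w, h = 256 * 14, 224 * 12
print(w, h, w * 3 == h * 4)  # 3584 2688 True

# Why the blown-up frames still compress well: point resizing only repeats
# pixels, and lossless coders exploit exact repetition very efficiently.
random.seed(0)
line = bytes(random.randrange(256) for _ in range(256))  # one noisy scanline
upscaled = b"".join(bytes([p]) * 14 for p in line)       # pointresize *14

raw_ratio = len(upscaled) / len(line)  # 14.0x more raw data
packed_ratio = len(zlib.compress(upscaled, 9)) / len(zlib.compress(line, 9))
print(raw_ratio, packed_ratio < 6)  # compressed, it grows far less than 14x
```

A lossy encoder sees none of this free redundancy the same way: after its transforms and quantization the repeated pixels stop being bit-exact copies, which is consistent with lossy encodes coming out bigger.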
Point taken. I didn't take into account how awfully small resolution some of these early game systems have.
I don't find it strange that the lossy encodes would come out bigger, though. I'm more surprised that lossless codecs can compress them so well.
Are these lines important for an HD encode that I don't want to upload to YouTube? What do they mean?
ConvertToYV24(chromaresample="point", matrix=(hd ? "Rec709" : "PC.601") )
ConvertToYV12(chromaresample=(hd ? "point" : "lanczos"), matrix=(hd ? "Rec709" : "PC.601") )
pass == 1 ? DupMC(log="dup.txt") : last
pass == 2 ? DeDup( \
threshold=0.00001, trigger2=100, show=false, dec=true, decwhich=0, \
maxcopies=20, maxdrops=20, log="dup.txt", times="times.txt" \
) : last
The first two lines are for maximizing visual quality: the conversion to YV12 throws away some color information, and doing it this way keeps more of it, but I'd argue this is only for purists. Nevertheless, your mileage may vary. Try one encode with them and one without and see if you can spot any difference.
The last lines eliminate duplicate frames. This will save bitrate and reduce the size of your encode. It probably won't save a lot, though.
What confuses me, though, is the two passes. x264 can be done in one pass, but perhaps DeDup needs two passes? If so, you can save a lot of time by removing those lines (at the expense of somewhat bigger files).
That is an internal AVISynth architecture limitation. The architecture requires streams to be seekable, which necessitates two passes: the first determines which frames are "identical", and the second actually drops them.
Actually, with modern encoding methods (especially for N64) one needs three passes (the 2nd and 3rd are both done with 'pass=2'), because x264 requires a complete timecodes file right at the start (or at least it used to).
Dedicated dedup filters (like dedup.c) can do this stuff in one or two passes (vs 2 or 3 required by AVISynth).
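The log-then-drop flow described above can be sketched in Python (a toy model, not DeDup's actual metrics: frame comparison here is plain equality, and the function names and 60 fps default are made up):

```python
# Pass 1 scans the whole stream and logs which frames duplicate their
# predecessor (DeDup records this in "dup.txt"); pass 2 drops those frames
# and emits timecodes for the survivors (the "times.txt" role).

def pass1_log(frames):
    """Return indices of frames identical to the previous frame."""
    return [i for i in range(1, len(frames)) if frames[i] == frames[i - 1]]

def pass2_drop(frames, dup_log, fps=60.0):
    """Drop logged duplicates; return kept frames and their timecodes (ms)."""
    dups = set(dup_log)
    kept = [(i, f) for i, f in enumerate(frames) if i not in dups]
    times = [round(i * 1000.0 / fps, 3) for i, _ in kept]
    return [f for _, f in kept], times

frames = ["A", "A", "B", "B", "B", "C"]
log = pass1_log(frames)            # [1, 3, 4]
kept, times = pass2_drop(frames, log)
print(kept)                        # ['A', 'B', 'C']
print(times)                       # timecodes of the surviving frames
```

The split makes the seekability point concrete: pass 2 needs the complete log before it can hand the encoder a consistent, already-decimated stream, so the work can't all happen in a single streaming pass inside AVISynth.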
Only for downloadable HD, which I don't think is actually used (dedup is hostile to Flash and to streaming sites). For normal downloadables (currently the only thing using dedup), the first passes are done with very fast settings (usually getting hundreds of fps) and only the last one is slow.
You get some error about dedup.dll not being found, or some such? As far as I know, that is usually caused by not having the libraries that dedup.dll itself requires.
If hd is true, these two are precisely the same thing (as can be seen by simplifying the code under the assumption that hd = true).
In the AVISynth script, the resolution computation is different for handhelds; otherwise there is no difference.
A hypothetical handheld with 256x224 resolution would give a 1536x1344 HD resolution, whereas a non-handheld of the same resolution (NTSC (S)NES) gives 3584x2688.
This is the scaling/AR-precorrection code. From the looks of it, it uses the older method for HD.
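A minimal sketch of that resolution computation (Python; the uniform x6 handheld factor is inferred from the 1536x1344 figure given above, the x14/x12 console factors from 3584x2688, and the function name is hypothetical):

```python
# Handhelds get a uniform scale (square pixels, no AR correction needed);
# consoles get unequal horizontal/vertical factors to precorrect for 4:3.

def hd_resolution(width, height, handheld):
    """Compute the HD encode resolution for a given native resolution."""
    if handheld:
        return width * 6, height * 6      # e.g. 256x224 -> 1536x1344
    return width * 14, height * 12        # e.g. 256x224 -> 3584x2688

print(hd_resolution(256, 224, handheld=True))
print(hd_resolution(256, 224, handheld=False))
```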
What is better for an HD encode: --fullrange on or off?
--qp 0 or --crf 0?
And should I set --ref to 16, or is --ref not important for HD? I read that increasing it raises the level.
please help :)