Unless I'm missing something, I noticed that pixel-perfect lossless quality is only kept in the RGB color space format.
Below is a side by side comparison (all recorded with Lagarith, then I took pics of the videos):
Is it possible to make it so the pixels will look lossless-perfect on Y′UV, YUV, YCbCr, YPbPr, etc., as well?
YUV444 (YV24) and RGB have pixel-perfect colors at native resolution because they're not discarding any of the color information in favor of compression.
You need to upscale at least 2x (and by an even factor, preferably a power of 2) to keep color information from bleeding around with yuv420, because of how it compresses the color information. For best results, use point resize (nearest neighbor).
I can't remember how to cheat with the other yuv schemes, but presumably integer upscaling would work for them as well.
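In AviSynth, that integer upscale could look something like this (a sketch — the source filter and filename are placeholders, and the 2x factor matches 4:2:0's 2x2 chroma blocks):

Language: Avisynth
AviSource("capture.avi")                 # hypothetical RGB capture
PointResize(Width() * 2, Height() * 2)   # nearest neighbor: each pixel becomes a uniform 2x2 block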
Just doubling wouldn't work, as most colorspace conversions use a scaler other than point resize when discarding the color info. The correct way is to make sure the chroma scaler uses point resize. Like so:
Language: Avisynth
ConvertToYV12(chromaresample="point")
To add onto this: unless a later AviSynth version fixed it, the "chromaresample" parameter is ignored if the source isn't already YUV, so the real way is something like this:
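Presumably that means going through 4:4:4 first, so the clip is already YUV by the time the chroma gets subsampled — a sketch of the idea rather than the exact original lines:

Language: Avisynth
ConvertToYV24()                          # 4:4:4 - YUV now, but no chroma discarded yet
ConvertToYV12(chromaresample="point")    # point is honoured since the source is YUV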
Okay, thanks.
One more thing: is there a way to do "ConvertToYV12(chromaresample="point")" through virtualdub's filters? I have avisynth installed, but I prefer to use virtualdub alone if possible.
I know how to point resize the video's resolution on "filters = resize (nearest neighbor)" but how can I resize the chroma?
If you're working exclusively in Virtualdub, nearest neighbor resize is all you need to do, followed by convert format to your desired colorspace. It should give the same result as doing a point resize followed by ConvertToYV12 (or whatever colorspace you're after). Resizing the resolution also resizes the luma (brightness) and chroma (color) components.
The problem with YV12 and YUY2 is that they're chroma subsampled: YV12 is 4:2:0 and YUY2 is 4:2:2, if I'm not mistaken. These exist for a variety of purposes, but don't really cater to pixel-oriented game videos. In the case of 4:2:0, there is only one chroma (color) pixel for every 2x2 block of luma resolution, as shown in the picture above. This is why the black pixels in your examples remain unblurred whilst color starts to spill into other pixels, and it's also why a resize with a factor of 2 solves this problem.
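To put numbers on that 2x2 block (assuming a simple box/average filter for the chroma downsample): the stored chroma is roughly U_stored = (U00 + U01 + U10 + U11) / 4, so one pixel with U=90 sitting next to three with U=128 gets stored as (90 + 3*128)/4 = 118.5 — a value none of the four pixels actually had. After a 2x point upscale, every 2x2 block is four copies of the same pixel, so the average is exactly the original value and nothing is lost.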
So assuming you have a video in RGB to begin with, you resize with a factor of 2 using point resize (nearest neighbor). The point of this is to double (or quadruple, etc.) each pixel both horizontally and vertically, making each 2x2 block contain four identical pixels. This means that when the chroma is compressed down to represent the 2x2 block, no data is discarded and no blurring/blending occurs.
Hopefully that helps you understand the situation more thoroughly. I darted around your question a bit, but the problem itself doesn't require you to resize the resolution and then the chroma separately - they're linked. The luma is the brightness and the chroma is the color data, and in a 4:4:4 colorspace (or in RGB) each pixel has its own unique values.
This doesn't work, because when you convert to YV12, the downsampling of the chroma is usually not done by point. Bilinear (I think?) and bicubic look at more than the 2x2 block, which means the color sample might carry a hint of the surrounding colors with it.
To make matters worse, when you convert back to RGB (which is what most displays use) or YUV 4:4:4, the upscaling of the chroma is probably not done with point resize either, adding some more color bleeding...
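If you're producing the comparisons/screenshots yourself in AviSynth, you can at least force point on the way back up too — assuming a 2.6-era build where the ConvertTo* functions accept chromaresample:

Language: Avisynth
ConvertToRGB32(chromaresample="point")   # chroma upsampled by duplication instead of blending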
This is the real killer for 4:2:0. It doesn't matter how smart you are when encoding; the playback software is going to stomp over that and make film-based assumptions about how to filter the chroma.
While on the subject:
Link to video
My typical editing/encoding process has me using point resize in avisynth, passing it as RGB32 to ffmpeg, and having ffmpeg encode it with x264 as yuv420. How much color fidelity am I losing by pushing that onto ffmpeg instead of doing it directly in avisynth?
Honestly, since you already did a point-resized upscale, not much. There will be slight color bleeding, but not enough for anyone to really notice at 1080p, since the original resolution was so small.
Edit: I just like to really do the best I can to reduce that amount, hence the lines I posted earlier.
So the larger the original footage, the more color bleeding there will be. Well, that's fair I guess. (I normally do 720p instead of 1080p, but GBx footage doesn't 2x scale to 720p)