For PSX videos, please use the screen resolution most commonly used in that particular movie.
As you might know, PSX can utilize a number of different resolutions:
Width of 256, 320, 512 or 640
Height of 240 or 480
In Chrono Cross, 3D scenes are usually 320x240, status screens are 512x480, FMVs are 256x240, and the BIOS screen is 640x480.
I think PSX videos should most commonly be encoded at 320x240.
It will cause downsampling for screens of higher resolution than that, but one solution cannot accomplish everything.
(For MKV and MP4, it would be possible to dynamically change the video resolution, but I think it's not safe to use that.)
For screenshots, please use whatever resolution that particular scene is in.
----------
(This message is not to complain that anyone has done wrong; I'm just giving instructions to be clear.)
I would like to suggest that the guideline be changed from "used most often" to "preserves clarity". For example, if a game uses 320x240 except during cutscenes, where it shifts to 640x480, I don't want the text to suddenly become unreadable.
Not that this would have a great impact; but for movies with a significant amount of switching (such that no resolution is dominant), the video could be letterboxed and centered, or resized to fit the chosen resolution.
*shrug*
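For the letterboxing idea, the centering math is simple. Here is a minimal Python sketch; the function name and the fixed 640x480 canvas are my own illustrative assumptions, not anything from an actual encoding script:

```python
# Hypothetical sketch: compute the padding needed to center a source
# frame on a fixed target canvas, without rescaling the frame itself.
def letterbox_offsets(src_w, src_h, dst_w=640, dst_h=480):
    if src_w > dst_w or src_h > dst_h:
        raise ValueError("source frame larger than target canvas")
    pad_x = (dst_w - src_w) // 2  # black bars on the left/right
    pad_y = (dst_h - src_h) // 2  # black bars on the top/bottom
    return pad_x, pad_y

# A 512x240 frame centered on a 640x480 canvas.
print(letterbox_offsets(512, 240))  # → (64, 120)
```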
Is there anything special that an encoder/publisher should look out for when trying to do so? I'm already getting strong warnings when the aspect ratios of files (unset vs 4/3) do not match. Maybe mmg is doing something wrong? Do you happen to have an example file handy?
About compatibility: how about publishing both files? An AVI with a fixed resolution for "regular" users, and an MKV with changing resolutions for the "technically experienced" ones?
I'm already looking forward to hatred filled posts. Keep 'em coming!
Might cause confusion, and I'm not sure if it's supported, or even easy to encode. Just because it is possible with the container/codecs doesn't necessarily mean that it can be done with any level of usefulness.
As soon as mplayer supports it, I'd be willing to do it. To my knowledge, it runs fine on Linux, Mac and Windows. For people running other OSes (not sure about the *BSDs), there's still an AVI file to fall back on.
I would add, although it might go without saying, just in case: if you need to change the video resolution from the original game's to the encoded video's, use the best image scaling filter available (whether scaling from a larger resolution to a smaller one or the other way around).
Just because you are scaling, e.g., a 640x480 FMV to 320x240 doesn't mean it has to look bad. With proper scaling filters it can look acceptable, without significant loss of visual information.
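As an illustration of what an interpolating scaler does, here is a naive bilinear filter in plain Python. This is far simpler than the Lanczos-style filters a real encoder would use, and all the names are made up for the example; the point is just that interpolation, not pixel duplication, is what keeps scaled video watchable:

```python
# Illustrative only: naive bilinear upscaling of a grayscale frame
# stored as a list of rows of pixel values.
def bilinear_upscale(pixels, new_w, new_h):
    old_h, old_w = len(pixels), len(pixels[0])
    out = []
    for y in range(new_h):
        # Map the destination coordinate back into the source grid.
        fy = y * (old_h - 1) / (new_h - 1) if new_h > 1 else 0
        y0 = int(fy)
        y1 = min(y0 + 1, old_h - 1)
        wy = fy - y0
        row = []
        for x in range(new_w):
            fx = x * (old_w - 1) / (new_w - 1) if new_w > 1 else 0
            x0 = int(fx)
            x1 = min(x0 + 1, old_w - 1)
            wx = fx - x0
            # Blend the four surrounding source pixels.
            top = pixels[y0][x0] * (1 - wx) + pixels[y0][x1] * wx
            bot = pixels[y1][x0] * (1 - wx) + pixels[y1][x1] * wx
            row.append(round(top * (1 - wy) + bot * wy))
        out.append(row)
    return out

# A 2x2 gradient upscaled to 3x3: the new in-between pixels are
# interpolated averages rather than hard duplicates.
print(bilinear_upscale([[0, 100], [100, 200]], 3, 3))
# → [[0, 50, 100], [50, 100, 150], [100, 150, 200]]
```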
Btw, why couldn't the maximum resolution used in the game be used for the video encode, scaling all the smaller resolution parts to that larger size?
No, it will not make the video file larger. The size of the video file is determined by the bitrate used, not by the resolution of the video. And no, using a larger resolution with the same bitrate does not decrease the quality of the video. (In fact, my experience tells me it's the opposite: using a larger source image resolution can actually increase the final video quality at the same bitrate, compared to making a video at half the resolution.)
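The size claim is just arithmetic: a constant-bitrate stream's size depends only on bitrate and duration, not on frame dimensions. A toy calculation, with purely illustrative numbers:

```python
# Back-of-the-envelope check: stream size follows bitrate and duration,
# not frame resolution. The figures are illustrative, not real encodes.
def stream_size_mb(bitrate_kbps, seconds):
    # kilobits/s -> bytes/s -> total bytes -> megabytes
    return bitrate_kbps * 1000 / 8 * seconds / 1_000_000

# A 10-minute video at 500 kbps is ~37.5 MB
# whether it is stored at 320x240 or 640x480.
print(stream_size_mb(500, 600))  # → 37.5
```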
I've seen different resolutions used sometimes, such as Ghost in the Shell's height of 216:
And Castlevania: SotN's width of 368:
When you do that, you miss most of the "i"s and "l"s in Castlevania: SotN's opening story, which lasts a few minutes; it also makes other letters difficult to read (the resolution of this screen is 512x240).
Most 3D games on the PS use 512x240, which makes screenshots look really bad:
I mean, they don't look "bad", but the games in reality don't look like that. It might break the website layout too...
I think that's the best solution.
Also, MKV with changing resolutions might be a bad idea... I wouldn't like to watch a video at 512x240 (in case we're planning to use the original resolutions rather than 4:3-scaled ones).
Really?
See this example. http://bisqwit.iki.fi/kala/512x240.avi (~1 MB)
Note that modern players do aspect ratio correction based on the aspect information indicated in the movie file.
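As a sketch of what that correction amounts to (assuming, purely for illustration, a player policy of keeping the stored height and rescaling the width to match the declared display aspect ratio):

```python
from fractions import Fraction

# Sketch of player-side aspect correction: the stored (anamorphic)
# frame is rescaled on display so width/height matches the declared
# display aspect ratio (DAR). The keep-the-height policy here is an
# assumption for the example; players may scale to other multiples.
def display_size(stored_w, stored_h, dar=Fraction(4, 3)):
    display_w = round(stored_h * dar)
    return display_w, stored_h

# A 512x240 frame flagged as 4:3 is presented at 320x240 proportions
# (or any multiple thereof), so it does not look stretched.
print(display_size(512, 240))  # → (320, 240)
```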
Cool, I didn't know that. Ignore what I said then. :P
(By the way, GOM still shows it as 512x240, but I guess most people don't use that player anyway. It works fine in VLC and MPC.)
You are obviously encoding them wrongly because it contradicts my experience.
Besides, you should use a more realistic upscaling to 640x480 rather than going overboard just for the sake of trying to prove me wrong.
Yeah, it must be my mistake, because you're perfect, and never make mistakes. I'm not "encoding them wrongly" in any way.
What does it even matter? Do you think upscaling will help when upscaling to 640x480, but not to 1280x960?
Stop being full of yourself. The reason you didn't see a quality loss when upscaling was probably that you used a high bitrate. This site uses low bitrates.
Warp, what you're talking about might not be a quality increase due to upscaling, but a quality decrease due to improper stretching of a downscaled video to fullscreen. If you use common sense you'll see that 320x240 = 76,800 pixels, while 640x480 = 307,200 pixels, which is 4 times as many pixels in each frame of the video. No, of course that doesn't mean such a video will necessarily consume four times the bitrate; it just means the codec has to compress 4 times as much data overall. There's no way that's going to be more efficient than the alternative, regardless of your experience. Math doesn't work on a personal level.
There are multiple ways of upscaling a low-resolution video at the decoding level, which may or may not look better than using an encoding filter to upscale it.
The MPEG stream is not storing individual pixels; it's storing information about those pixels.
If you upscale a 320x240 image to 640x480, the amount of information in the image has not increased at all. The number of pixels has quadrupled, and the in-between pixels may have been interpolated from the original pixels, but the amount of information conveyed by them has not increased. Even with lossless compression methods you could compress the resulting 640x480 image into the same size as you could compress the original 320x240 image (because, once again, the upscaled image has no additional information compared to the original). You just have to choose the compression technique appropriately.
Additionally, MPEG-4 searches for shapes in the video being compressed. If those shapes are represented as vectors, the scale of the shapes doesn't matter.
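The no-added-information claim is easy to sanity-check for the simplest case, nearest-neighbour doubling, where the upscale can be undone exactly. This is a toy sketch of my own; real interpolating filters are not exactly invertible, but the argument that the redundancy follows a fixed rule is the same:

```python
# Tiny demonstration that a naive upscale adds no information: a 2x
# nearest-neighbour upscale can be undone exactly by decimation, so the
# original frame is fully recoverable from the larger one.
def upscale_2x(pixels):
    out = []
    for row in pixels:
        wide = [p for p in row for _ in (0, 1)]  # duplicate horizontally
        out.append(wide)
        out.append(list(wide))                   # duplicate vertically
    return out

def decimate_2x(pixels):
    # Keep every other pixel of every other row.
    return [row[::2] for row in pixels[::2]]

original = [[10, 20], [30, 40]]
assert decimate_2x(upscale_2x(original)) == original
```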
Yes, it has. Now you're not just storing the information about the original pixels; you're also storing the information about the interpolated pixels. The amount of useful information has not increased, but there is redundant information now, which also has to be stored. The only reason it doesn't require a proportionally higher bitrate is that the information density is lower now. (Similarly, downscaling a picture to 1/4 of its resolution will commonly require a higher bitrate than original/4.)
May I request an example, together with the resulting video files? That would alleviate all possible confusion (and don't you dare cop out after starting this debate).
It's searching for shapes in a raster image, because the image is still raster, not vector graphics. The two are absolutely incomparable in efficiency, speed, compression ratio, scaling quality, everything; and no matter how advanced motion recognition algorithms get, they won't ever come close to actual vector graphics.
I apologize, but I still find this extremely hilarious:
You are basically saying above that, according to "math", a file containing 307,200 pixels cannot be compressed to be as small as a file containing 76,800 pixels. Of course that is simply ludicrous. The number of pixels has absolutely nothing to do with how small they can be compressed (even losslessly). The relevant thing is how much information there is in the file.
An image which has been scaled up to quadruple size, e.g. by linear interpolation of the in-between pixels, does have a bit more information than the original image. However, that amount of information is fixed (basically "every other pixel is the average of the surrounding pixels") and independent of the resolution. The larger image could still be (losslessly) compressed to the same size as the original image (give or take some bytes; I'm talking about magnitudes rather than exact byte counts) because there is no added per-pixel information.
If you are unable to get the same image quality for the same bitrate for an up-scaled video, it can only mean one of two things:
1) The video compression format is not smart enough to see the interpolated pixels and take this into account, or
2) you are not using the proper compression options.
My experience is that at least some codecs can produce images of equal quality at the same bitrate.
When you climb down from your cross, note that it was you who:
a) suggested something from the height of your experience;
b) argued that Johannes (who has already proved his experience with public encodes) was wrong, while the only proof of your claim was your experience; and
c) never showed anything to back up your claims of experience.
Do you really believe it's alright to claim that people are wrong without proving it and expect everyone to take your words for granted? If it is so, I'm afraid you need to grow up a bit.
Before bashing other people for no reason, you should look at yourself and your own mistakes first, smarty. Just a suggestion.