I'm not actually sure what the cause of that is. It might be that the compressor read the state file before it was fully written. There aren't a lot of these, but I might have to re-run it, making sure that the file size is correct before reading.
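Something like this is what I have in mind - a minimal sketch that assumes the size of a complete state is known up front (the polling interval is arbitrary):

```rust
use std::{fs, io, path::Path, thread, time::Duration};

/// Poll until the savestate on disk reaches the size a complete
/// state should have, then read it; this guards against catching
/// the file mid-write. `expected_len` is a stand-in for whatever
/// the correct size turns out to be.
fn read_full_state(path: &Path, expected_len: u64) -> io::Result<Vec<u8>> {
    while fs::metadata(path)?.len() < expected_len {
        thread::sleep(Duration::from_millis(10));
    }
    fs::read(path)
}
```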
I don't want to interact directly with Gens because I am running it in wine, and... building 32-bit Windows executables? That link to OpenGL? On Linux? No thanks.
Also, that lets me fast-forward much faster because qbsdiff decompression is a ton faster than emulation.
I just set Tails' position to Sonic's position when Tails is either at (0, 0) or has x > 0x7000. Good enough for me.
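In code it's nothing more than a guard like this (the struct and field names here are made up, not the ones from the repo):

```rust
/// Hypothetical layout - the real object structs live in the repo.
struct Pos { x: i32, y: i32 }

/// Snap Tails to Sonic whenever his coordinates are garbage.
fn fixup_tails(sonic: &Pos, tails: &mut Pos) {
    if (tails.x == 0 && tails.y == 0) || tails.x > 0x7000 {
        *tails = Pos { x: sonic.x, y: sonic.y };
    }
}
```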
The issue with displaying the background together with the smooth camera is that the BG is inherently tied to the current camera position. If I were to try to display the background, I'd have to emulate the behaviour of all animated tiles, which is not something I have the time for.
The glitchy (e.g. boss) sprites are still there, but they're not shown: I'm reading the object list at a specific point in time, when sprites are drawn: https://gogs.selic.re/x10A94/atlas-s3k/src/master/dumpframes.lua#L26 - this also means that lag doesn't screw up how sprites are drawn, since they always use the relevant info rather than "next frame" info.
I'm currently experimenting with showing a short range of objects with known mappings. In the final encode, they will likely be shown at all times. The only problem I have is that the respawn table data doesn't match up with the object data, so there's a slight discontinuity. This'll likely require me to re-bake all savestates (which, by the way, I've delta-compressed with qbsdiff so they take up around 400MB rather than 209GB).
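Roughly, the qbsdiff side is just the crate's two documented entry points, with each state stored as a patch against the previous one; a sketch following the crate's README:

```rust
use std::io;
use qbsdiff::{Bsdiff, Bspatch};

/// Produce a binary patch that turns `source` into `target`.
fn bsdiff(source: &[u8], target: &[u8]) -> io::Result<Vec<u8>> {
    let mut patch = Vec::new();
    Bsdiff::new(source, target).compare(io::Cursor::new(&mut patch))?;
    Ok(patch)
}

/// Reconstruct `target` from `source` plus a stored patch.
fn bspatch(source: &[u8], patch: &[u8]) -> io::Result<Vec<u8>> {
    let mut target = Vec::new();
    Bspatch::new(patch)?.apply(source, io::Cursor::new(&mut target))?;
    Ok(target)
}
```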
Oh! I thought you were referring to the world boundary, i.e. the death plane and the camera scroll stop points. The reason I didn't add this is how frantic the screen actually is; you can see that with e.g. the score tally in Ice Cap 2. It makes things a bit distracting. I'll definitely draw it when you are sufficiently offscreen, though.
Thanks for the kind words!
So, fun thing is, I kinda didn't wanna add it because as of right now the entire viewport is drawn with the same GLSL shader - everything, including the menu, is drawn using paletted tiles. Adding something like a border would require me to write a new shader. It's a good idea, though! I just wanted to push this out as fast as possible.
How do you think I should draw it? Fully shade the unreachable area, or, say, use a gradient line? I've also been thinking of drawing the camera boundary when Sonic is sufficiently far outside of it.
Link to video
So this is a full rewrite to make this thing realtime. I've changed a bunch:
* Plane B priority tiles are shown when necessary
* Added a HUD showing some technical info and ring counts
* Made the camera movement interpolated - there are still some silent-teleport issues, but I know of a workaround (there's a rough sketch of the interpolation after this list)
* Objects no longer flicker when on screen
* The camera is no longer constrained to level boundaries and is able to show objects outside of levels - doesn't render loopback tiles yet, but I might change that
* Removed the checkerboard background because it unsurprisingly murdered the bitrate
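The interpolation itself is just exponential smoothing toward a target point each frame. A rough sketch (the snap-on-big-jump guard is one plausible way to deal with the teleports, not necessarily the workaround I'll end up using):

```rust
/// Move the camera a fixed fraction of the way toward its target
/// each frame; `alpha` in (0, 1] controls the stiffness.
/// Snapping on large jumps avoids sweeping the camera across the
/// whole level when a silent teleport happens.
fn step_camera(cam: (f64, f64), target: (f64, f64), alpha: f64, snap_dist: f64) -> (f64, f64) {
    let (dx, dy) = (target.0 - cam.0, target.1 - cam.1);
    if dx.hypot(dy) > snap_dist {
        return target; // likely a teleport: cut instead of gliding
    }
    (cam.0 + dx * alpha, cam.1 + dy * alpha)
}
```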
Things left to do:
* Render out-of-bounds chunks when necessary
* Splitscreen for Tails when the distance between S&T is too big
* Account for compound sprite X overflow a bit better
* Maybe use vertical offset-per-tile data? I don't really know how I'd implement that
* Make the final boss and ending not horrifyingly broken
Sorry about the delay! Real life stuff got ahold of me. I'll try to work on it this month; I feel like even if it's not perfect, it'll be received well enough that we don't need to bother with all of the edge cases, and I'd be able to make a better version if I have the time and energy to put into it.
Yeah, that's kinda what I meant - I won't have any intermediate steps, it'll dump in 4K straight away.
He goes semi-opaque because every object is semi-opaque while not being rendered; I feel like adding more special cases is gonna make the code look pretty terrible. Hell, making objects semi-opaque to begin with already makes it rather complex, but it's a bit important, because it lets you easily see why you can pass through objects at that particular time.
I might switch it to being e.g. desaturated if it's deemed too distracting.
Oh, and about the NUT encoding: I'll definitely try that, just in case image2pipe has some weird perf issues.
Thank you! This will be very useful.
I'll render the final output in 4K in order to cancel out the 4:2:0 chroma subsampling: since 4:2:0 stores chroma at half resolution in each dimension, a 4K encode keeps full chroma detail for every 1080p source pixel. 8K would probably be overkill, unless someone else does it for me (prepare to spend 200GB on temp files!).
Yeah, sorry for not keeping the codebase commented - I might have to do that for most of it so that others can learn and contribute.
The intermediate images are stored in a hashmap because they're rendered in a thread pool and can arrive out of order; they're removed from the hashmap once there's a contiguous run to pass along. Rust has move semantics, meaning that when you have ownership of some data you are its sole owner, and it's guaranteed that once the value is dropped at the end of its scope, the buffer is deallocated - unlike in e.g. GC'd languages.
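The shape of it is roughly this - a simplified sketch, not the literal code from main.rs:

```rust
use std::collections::HashMap;

/// Collects frames that finish rendering out of order and hands
/// them back in sequence.
struct Reorderer {
    pending: HashMap<u64, Vec<u8>>, // frame index -> encoded image
    next: u64,                      // next index the muxer expects
}

impl Reorderer {
    fn new() -> Self {
        Reorderer { pending: HashMap::new(), next: 0 }
    }

    /// Insert a finished frame, then drain every frame that is now
    /// contiguous. Moving the buffers out of the map transfers
    /// ownership, so each one is deallocated as soon as the
    /// consumer drops it - no GC involved.
    fn push(&mut self, index: u64, frame: Vec<u8>) -> Vec<Vec<u8>> {
        self.pending.insert(index, frame);
        let mut ready = Vec::new();
        while let Some(frame) = self.pending.remove(&self.next) {
            ready.push(frame);
            self.next += 1;
        }
        ready
    }
}
```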
Well, it's nowhere near finished - it's definitely just a draft, and I'll need to polish it a lot more.
https://hyper.is-a.cat/gogs/x10A94/atlas-s3k/src/master/src/main.rs#L244
As of right now, this is the code that collects the images rendered in a thread pool, runs each one through a BMP encoder, and pipes the output into ffmpeg.
Actually, now that I think of it, I could probably squeeze out some more performance by using an RgbImage as the target, since then the encoder would not have to do much to actually encode it.
Now that I think about it, I'm pretty sure I'll manage. I'm currently using the veryfast preset to improve the encoding framerate, but for the final render I might as well use veryslow, which would improve compression significantly. The biggest bottleneck is probably just piping the data - it would likely be more efficient to use ffmpeg directly as a library, but sadly I have absolutely no idea how to do so.
If someone has any sort of idea on how to feed the ffmpeg library a bunch of raw image buffers and get a video out of it, I'm all ears; the project is open-source: https://hyper.is-a.cat/gogs/x10A94/atlas-s3k/
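Failing the library route, one thing worth trying is skipping the image encoder entirely and feeding ffmpeg headerless RGB24 frames over stdin. A sketch of what that could look like (the resolution, framerate, and output flags are just examples):

```rust
use std::io::Write;
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // Tell ffmpeg to expect headerless RGB24 frames on stdin.
    let mut ffmpeg = Command::new("ffmpeg")
        .args([
            "-f", "rawvideo",
            "-pix_fmt", "rgb24",
            "-video_size", "3840x2160",
            "-framerate", "60",
            "-i", "-",
            "-c:v", "libx264",
            "-pix_fmt", "yuv420p",
            "out.mkv",
        ])
        .stdin(Stdio::piped())
        .spawn()?;

    let stdin = ffmpeg.stdin.as_mut().unwrap();
    // One dummy black frame; in the real pipeline this would be
    // RgbImage::as_raw() straight from the renderer, once per frame.
    let frame = vec![0u8; 3840 * 2160 * 3];
    stdin.write_all(&frame)?;

    drop(ffmpeg.stdin.take()); // close stdin so ffmpeg finalizes the file
    ffmpeg.wait()?;
    Ok(())
}
```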
Link to video
I got the encode to the point where it's possible to render most of the game with negligible artifacts. At some point in the future I'm gonna have to make it upload to YouTube while it's still being encoded in 4K, because I simply do not have the disk space.
Still not sure how to handle backgrounds. For now it's a checkerboard; it might be a bit jarring, so I may make the squares bigger.
Since the publication of the movie is fairly soon, I decided to work some more on the Atlas encode - this time with a dynamic camera, which still needs a bunch of fixes to make transitions less jarring.
Link to video
I've tried to run Marble Marcher, the new game that uses fractal geometry for its level layout, but it looks like it doesn't quite replay inputs correctly: after 2-3 frames of playback it starts using real inputs instead of emulated ones. The game's open-source, so it'll likely be much easier for you to understand what's going wrong. TAS runs of this game would likely be extremely impressive, given how precise its collision is.
Link to the game: https://github.com/HackerPoet/MarbleMarcher
Yeah, your mod is great for realtime playthroughs, but for a TAS like this, where the camera must be dynamic to make sure you actually see where both characters are in the level, it should account for future frames as well, which isn't really possible in a realtime renderer.
It's also pretty crucial to be able to hardcode things at certain points in the run, since it's unlikely that I'd be able to nail the presentation for everything in the entire game.
Here's my code: https://hyper.is-a.cat/gogs/x10A94/atlas-s3k/src/master
I did not want to keep this specific to Gens, considering there are plenty of BizHawk runs out there - or even specific to the Genesis. Thus, it'll be possible for me to create Atlas encodes of other games, even if they're slightly less polished than those done with video editing. As such, the lowest common denominator I could get to work on Linux was... creating a savestate for each frame, plus some mid-frame data. This isn't very optimized, but it really doesn't need to be, given that it produces a 1080p video at around 15fps on my machine, which is fast enough for debugging.
If you somehow get it to work on Windows, I'll put it on GitHub so you can make a pull request.
Speaking of which, here's HCZ2: https://www.youtube.com/watch?v=sxzHItPChgQ
It's still not perfect - I have yet to figure out how to hide the HUD more reliably, as well as how to draw the plane B data - but it does show how Tails is used off-camera in most places.
Moving forward, I'm going to change it to focus on the interpolated midpoint of Sonic and Tails, so that Tails doesn't end up off-camera.
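The target point itself is trivial - something like this, with clamping to the level bounds left out:

```rust
/// Camera target: the midpoint between the two characters.
/// (A real version would also clamp so both stay in the viewport.)
fn camera_target(sonic: (f64, f64), tails: (f64, f64)) -> (f64, f64) {
    ((sonic.0 + tails.0) / 2.0, (sonic.1 + tails.1) / 2.0)
}
```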