There was a technique the AI used when playing Arkanoid, where it kept the ball bouncing along the top of the screen, that I recognized from an Arkanoid TAS I had seen years earlier. That made me wonder how competitive it would be with TAS players.
An optimized AI is more or less equivalent to a TAS, with perhaps a few concessions depending on whether you allow it to probe the game's internal memory at any time.
For physics-based games like Arkanoid, it's a fairly simple matter to completely solve the relevant equations for any possible contingency, keep the ball in play, and direct it to particular spots. The only place this exhaustive analysis might fall short is if there's a need for luck manipulation, provided the AI isn't able to determine the RNG seed's value at any point in time (if it ever knows the value accurately, it can keep its "mental image" of the game state up to date by running a copy of the same RNG algorithm in parallel with the game itself).
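As a sketch of what that parallel RNG tracking might look like: many 8-bit games step a simple linear congruential generator once per frame, so once the seed is read out of RAM, a local copy stays in lockstep forever. The LCG constants below are made up for illustration, not taken from any real game.

```python
# Sketch: once the AI learns the RNG seed, it can mirror the game's RNG
# in lockstep. The 16-bit LCG constants here are illustrative only.

class MirroredRNG:
    """Local copy of a hypothetical game's 16-bit LCG."""

    def __init__(self, seed):
        self.state = seed & 0xFFFF

    def step(self):
        # Advance one frame, exactly as the game would.
        self.state = (self.state * 0x41C6 + 0x3039) & 0xFFFF
        return self.state

# Suppose the AI reads the seed 0x1234 out of the game's RAM once.
game_rng = MirroredRNG(0x1234)   # stands in for the real game
ai_rng = MirroredRNG(0x1234)     # the AI's "mental image"

# As long as both advance once per frame, they never diverge.
for frame in range(100):
    assert game_rng.step() == ai_rng.step()
```

From that point on the AI can look arbitrarily far ahead in the RNG stream without touching the game again, which is exactly what luck manipulation needs.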
In a word: no. And an optimized AI is not equivalent to a TAS at all. Chiefly, it can't predict the future to make a suboptimal decision now that leads to a more optimal outcome later.
The best you can get is a dynamic-programming-like solution, but that's computationally infeasible for just about every game out there and would require the AI to actually be playing the game (or a complete facsimile of it), simulating such things as save states anyway. Even then, the end result isn't really an AI playing the game but one creating a script, at which point actually 'playing' becomes the Chinese room problem (following a set of defined state instructions, not actual intelligence).
Or I guess if a game had no randomness at all and a single optimal solution. But that would be silly.
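To make the "creating a script, not playing" point concrete, here's a toy sketch: an exhaustive memoized search over a tiny made-up deterministic game. Everything here (the positions, the buttons, the pit at position 5) is invented for illustration; the point is that the solver's output is a fixed input script that could then be replayed mindlessly.

```python
# Exhaustive memoized search over a made-up deterministic mini-game.
# The "solver" never plays in real time; it emits an input script.
from functools import lru_cache

GOAL = 7  # reach position 7 in as few frames as possible (hypothetical)

def step(pos, button):
    # Toy physics: 'A' jumps 2 forward, 'right' walks 1, 'wait' does
    # nothing. Landing exactly on position 5 (a pit) resets you to 0.
    new = pos + {'A': 2, 'right': 1, 'wait': 0}[button]
    return 0 if new == 5 else min(new, GOAL)

@lru_cache(maxsize=None)
def solve(pos):
    """Return the shortest input script from pos to GOAL, or None."""
    if pos == GOAL:
        return ()
    best = None
    for button in ('A', 'right', 'wait'):
        nxt = step(pos, button)
        if nxt <= pos:          # prune non-progress to guarantee termination
            continue
        tail = solve(nxt)
        if tail is not None and (best is None or len(tail) + 1 < len(best)):
            best = (button,) + tail
    return best

script = solve(0)   # a plain tuple of button presses, nothing more
```

The search jumps over the pit (5) by taking 'A' from 4 to 6, and the result is just data. Replaying `script` frame by frame is the Chinese room: the instructions are followed with no intelligence involved at playback time.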
http://www.engadget.com/2015/02/26/deepmind-atari-games-tests/
Sorry, but I thought people here would have heard of this. Maybe it isn't as widely known as I thought. Also, since the AI uses results from deep learning, maybe that could be applied to TASing. I wonder if there is anyone who is good both at applying deep learning techniques and at making TASes.
Not quite true. Theoretically it is possible to branch off different attempts for a limited number of frames by copying the game's process and testing them before the real time for one frame has passed. It's limited, though; predictions far into the future would need a supercomputer.
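A minimal sketch of that branching idea, with the real game stood in for by a trivial paddle-position dict (a real setup would fork an emulator core instead): copy the state, try every short input sequence, and commit only the first input of the best sequence found before the frame's time budget runs out.

```python
# Time-budgeted lookahead: branch copies of the state, score each branch,
# act before the next real frame arrives. The "physics" here is a toy
# stand-in, not any actual game.
import copy
import itertools
import time

FRAME_BUDGET = 1 / 60  # seconds available before the next real frame

def advance(state, button):
    # Placeholder physics: the paddle moves one unit per frame.
    state['paddle'] += {'left': -1, 'none': 0, 'right': 1}[button]
    return state

def score(state):
    # Closer to the ball is better.
    return -abs(state['paddle'] - state['ball_x'])

def plan(state, depth=3):
    """Best first input found before the frame budget expires."""
    deadline = time.monotonic() + FRAME_BUDGET
    best_seq, best_score = ('none',), float('-inf')
    for seq in itertools.product(('left', 'none', 'right'), repeat=depth):
        if time.monotonic() > deadline:
            break                      # out of time: act on what we have
        branch = copy.deepcopy(state)  # "save state" for this branch
        for button in seq:
            advance(branch, button)
        if score(branch) > best_score:
            best_seq, best_score = seq, score(branch)
    return best_seq[0]                 # only the first input is committed

state = {'paddle': 0, 'ball_x': 3}
```

With 3 inputs per frame and depth 3 there are only 27 branches, easily done in a sixtieth of a second; at depth 60 there are 3^60 branches, which is the supercomputer territory mentioned above.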
But there are numerous cases where an AI would have trouble, even with the best computer in the world, I think.
For example, finding the Technodrome in Teenage Mutant Ninja Turtles for the NES, wherever it appears. And that is nothing compared to actually making it appear in the optimal place to begin with.
Spikestuff, these don't fit the criteria. They were still done with rerecording, which an AI doesn't have.
It does, but most of the articles I saw about this project are either exaggeration or bullshit. The YouTube videos also don't showcase any intelligent AI moments (yes, they can control a character, wow, much skills).
As has been said before, an external AI would have difficulty seeing how its actions would influence the game outside of the general "Are these parameters X? Then do input Y." It can have perfect knowledge of the game, but only in the moment.
To use Metroid II as an example, an AI would have difficulty judging the optimal time to shoot the first missile at a Metroid that has a transition sequence, such as the first Metroid of the game. Humans can account for it because we already know to expect something unusual.
Ideally, an organic AI would be better than a simple in-the-moment AI, but that's rather more difficult to write.
If something happens at random and needs to be reacted to before it happens (e.g. by shooting a projectile), there's no way (barring perfect luck or rerecording) that you can react to it appropriately every time.
A TAS can get perfect luck and can rerecord, but this sort of AI can't.
Brought to you by the guy who created Theme Park (1994) at age 17.
It seems to have a lot of potential:
Link to video
The games discussion starts at 9:23.
And it seems to come up with nice strats :D
Spikestuff, these don't fit the criteria. They were still done with rerecording, which an AI doesn't have.
For an AI to play effectively, it must be able to predict the future state of the game. It does that, effectively, by emulating the game and seeing the consequences of decisions it can make right at this moment. Humans do this too, just in a more abstract sense. It seems silly to mandate that an AI be required to re-implement the game from scratch when there's a working copy readily available.
Pyrel - an open-source rewrite of the Angband roguelike game in Python.
It sees the outcome of moves available to it now; it becomes harder and more complex to then have it predict all the possible outcomes of each option available to it after taking any of the actions available now (an exponentially growing spiderweb of possibilities, notably more complex if the RNG is manipulable).
An AI would not be able to determine the optimal way to play unless every single possible outcome of every single action is worked out in advance for each RNG state at every moment, and at that point you may as well just TAS the damn thing to avoid all the effort you'd otherwise need to craft such a thing.
That's not really an AI though, and we're not talking about the theoretical most-optimal TAS. Because yeah, the only way to provably produce the most optimal TAS is to do a breadth-first search of the input space, which is physically impossible due to the exponential explosion of possible input sequences.
The question is whether we could produce an AI that can do a better job of making TASes than human TASers do. And for that, you "just" need better heuristics, creativity, patience, and insight than human TASers have. Still a very hard problem, and I don't think we're near solving it, but that's mostly because very few people are working on the problem of writing AIs to play games. I don't think it's intractable, and I expect that with our current computing power we could readily make a TASing AI if we only knew how.
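That explosion is easy to put a number on. Assuming a stock NES controller (8 buttons, each pressed or not, ignoring impossible combinations) and 60 fps, a quick back-of-the-envelope sketch:

```python
# Rough size of the brute-force input space for a one-minute NES movie.
from math import log10

inputs_per_frame = 2 ** 8   # 8 buttons, each pressed or not = 256 combos
frames = 60 * 60            # one minute of input at 60 fps

# Number of decimal digits in inputs_per_frame ** frames, computed via
# logarithms so we never materialize the giant integer itself.
digits = int(frames * log10(inputs_per_frame)) + 1
# -> roughly an 8,670-digit count of candidate input sequences
```

Even at one minute, the count of candidate sequences has thousands of digits, so any practical approach has to prune with heuristics rather than enumerate.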