tom_mai78101
He/Him
Player (127)
Joined: 3/16/2015
Posts: 160
Randomno wrote:
Gave it a quick try. I had no idea what to set the probability controls to. I wanted to use as few left presses as possible, so I set right and B to 65, A to 25, and left to 5. For the main value I used the highest X position, over 100 frames. It had good results after 75 generations; I kept it going until 180 generations and it hadn't improved further. I was able to gain a few frames by manually editing the input afterwards to remove a few left presses. My initial attempt had 715 xpos, the bot found 730, and my changes afterwards got it to 741. So a good result for just a few minutes of work, and it saved me figuring out a frustrating section to optimise.
Thanks for the testimonial. And sorry about the missing DLL ZIP file on the GitHub release; the DLL ZIP file is now available there.
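For anyone wondering what those probability controls actually do, here is a rough sketch of probability-weighted input generation with the highest X position as the fitness value. It is illustrative only, written in Python rather than the bot's actual code, and all names (BUTTON_PROBABILITIES, random_attempt, evolve, and the evaluate callback) are hypothetical:

```
import random

# Hypothetical sketch: each button is pressed on a given frame with its
# configured probability (out of 100), mirroring the probability controls
# described above. Fitness is whatever evaluate() returns, e.g. the
# highest X position reached during the 100-frame window.
BUTTON_PROBABILITIES = {"Right": 65, "B": 65, "A": 25, "Left": 5}
FRAME_COUNT = 100

def random_attempt():
    """One candidate: a list of per-frame button states."""
    return [{btn: random.randint(1, 100) <= p
             for btn, p in BUTTON_PROBABILITIES.items()}
            for _ in range(FRAME_COUNT)]

def mutate(attempt, rate=0.02):
    """Flip a small fraction of presses to explore nearby input sequences."""
    return [{btn: (not pressed) if random.random() < rate else pressed
             for btn, pressed in frame.items()}
            for frame in attempt]

def evolve(evaluate, generations=180, population=20):
    """Keep the best attempt seen so far; evaluate() replays the inputs
    in the emulator and returns the fitness (highest X position)."""
    best = random_attempt()
    best_score = evaluate(best)
    for _ in range(generations):
        for candidate in (mutate(best) for _ in range(population)):
            score = evaluate(candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best, best_score
```

Setting left to 5 makes left presses rare in fresh candidates, which is why only a few manual edits were needed afterwards to remove the stragglers.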
tom_mai78101
He/Him
Player (127)
Joined: 3/16/2015
Posts: 160
NhatNM wrote:
How about letting the bot learn from a sample movie file? The user would make a run movie from point A to point B, then the bot would learn the needed inputs and the destination target, and try to complete it as fast as possible.
I just now re-read your question. What you are describing is learning from a demonstration (imitation learning, a form of supervised learning, closely related to reinforcement learning), which is a completely different discipline from genetic algorithms. My bot doesn't do this. Sorry, but you will need to find other ways to do this.
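To make the distinction concrete, learning from a demonstration would look roughly like the simplified, purely hypothetical sketch below, where a policy just copies the demonstrator's inputs for the most similar recorded game state; contrast that with the genetic algorithm, which evolves raw input sequences and never sees a demonstration:

```
# Purely hypothetical sketch of learning from a demonstration: the "state"
# is whatever features you extract from game memory (e.g. player X/Y),
# and the policy copies the demo's buttons for the most similar state.

def train_from_demo(demo):
    """demo: list of (state, buttons) pairs recorded from a sample movie."""
    return list(demo)  # trivial "model": memorize the demonstration

def policy(model, state):
    """Return the buttons the demo used in the nearest recorded state."""
    _, buttons = min(
        model,
        key=lambda pair: sum((a - b) ** 2 for a, b in zip(pair[0], state)),
    )
    return buttons

# Example: a two-frame demo where the player moved right at x=10 and x=12.
demo = [((10, 0), {"Right": True}), ((12, 0), {"Right": True, "B": True})]
model = train_from_demo(demo)
print(policy(model, (11, 0)))  # -> {'Right': True}
```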
tom_mai78101
He/Him
Player (127)
Joined: 3/16/2015
Posts: 160
Made a pre-release for an experimental build that applies the recurrent NEAT algorithm: https://github.com/tommai78101/Bizhawk-GeneticAlgorithmBot/releases/tag/neat-1.0.4 It's recommended to use the latest dev build to try it out. If it doesn't work, you may want to merge the code changes from this pull request into a BizHawk fork, then rebuild: https://github.com/TASEmulators/BizHawk/pull/3723
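For anyone unfamiliar with what "recurrent" adds to NEAT: connections are allowed to form cycles, so neurons keep their previous activation and the network is stepped once per frame, giving it a simple form of memory. A minimal illustrative sketch follows (not the build's actual code; the gene layout and names are assumptions):

```
import math

# Illustrative sketch of stepping a recurrent NEAT network. Unlike a
# feed-forward net, the connection genes may form cycles, so each node's
# previous activation is kept and fed forward on the next frame.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def step(connections, activations, inputs):
    """Advance the network one frame.

    connections: list of (src, dst, weight) gene tuples (cycles allowed)
    activations: dict of node -> previous activation (the recurrent state)
    inputs: dict of input node -> value for this frame
    """
    activations.update(inputs)
    sums = {}
    for src, dst, weight in connections:
        # The previous-frame activation of src feeds dst; a connection
        # pointing back to an earlier node carries memory across frames.
        sums[dst] = sums.get(dst, 0.0) + activations.get(src, 0.0) * weight
    for node, total in sums.items():
        activations[node] = sigmoid(total)
    return activations
```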