I was trying a little bit of the same... but ran into trouble :P
basically what an Agent gets is an 'observation': per-tile information describing what each cell of the level contains...
converting this back into usable data (such that the game can be simulated to predict anything) is both inaccurate and tedious, so simulating anything (for instance, to evaluate future states) is very annoying
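To make the "converting back" step concrete, here's a minimal sketch of what I mean: turning a per-tile observation grid into a collision mask an agent could actually reason about. The tile codes and grid shape below are made up for illustration, they're not the competition's actual values:

```python
# Hypothetical tile codes -- the real competition uses its own byte
# values; these are placeholders for the sketch.
SOLID_TILES = {-10, -11, 20}   # e.g. ground, edge, brick

def blocking_mask(observation):
    """Convert a per-tile observation grid into a boolean
    'does this cell block Mario' mask."""
    return [[tile in SOLID_TILES for tile in row] for row in observation]

obs = [
    [0,   0,   0],    # empty air
    [0,   20,  0],    # a brick in the middle
    [-10, -10, -10],  # solid ground
]
mask = blocking_mask(obs)
# bottom row comes out all True (solid ground)
```

Even this trivial step already throws information away: the grid tells you *that* a tile holds an enemy, but not its exact sub-tile position or velocity, which is exactly what makes faithful simulation hard.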
I'm really curious how (or if) the author of that A* agent did this (I don't see how an A* algorithm could work without simulating the game, tbh... if someone knows, please do inform me)
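To show why I think A* needs a simulator: every node expansion has to call a forward model to get the successor state. Below is a toy sketch where `simulate()` is a stand-in grid walk; in the real competition you'd need the actual Mario physics in its place (all names here are my own, nothing from the competition API):

```python
import heapq

# Toy level: 4x4 grid, two wall cells, goal in the far corner.
WALLS = {(1, 1), (1, 2)}
GOAL = (3, 3)

def simulate(state, action):
    """Stand-in forward model: apply a move, refuse to enter walls.
    This is the part that would have to be the real game physics."""
    dx, dy = action
    nxt = (state[0] + dx, state[1] + dy)
    if nxt in WALLS or not (0 <= nxt[0] <= 3 and 0 <= nxt[1] <= 3):
        return state  # blocked: stay put
    return nxt

def heuristic(state):
    # Manhattan distance to the goal (admissible for unit-cost moves)
    return abs(GOAL[0] - state[0]) + abs(GOAL[1] - state[1])

def astar(start):
    frontier = [(heuristic(start), 0, start, [])]
    seen = set()
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path
        if state in seen:
            continue
        seen.add(state)
        for action in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = simulate(state, action)  # <-- forward-model call
            heapq.heappush(
                frontier,
                (cost + 1 + heuristic(nxt), cost + 1, nxt, path + [action]))
    return None

path = astar((0, 0))  # optimal plan around the walls, 6 moves
```

The point is the `simulate()` call inside the expansion loop: without a way to predict the next state from (state, action), the search has nothing to expand, which is why the tile-only observation seemed like a dead end to me.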
edit: in the movie you can see how Mario sticks to a spikey... rather than staying out of the spikey's tile... so somehow his AI 'knows' more than just tile-by-tile information
edit2: what I'm saying is that either the author didn't build his agent on the provided Agent interface, or I'm missing a crucial point somewhere
edit3: that was indeed the case, but the competition's code has since been changed, making exact simulation accuracy possible :)