Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
CyberShadow wrote:
Warp wrote:
A breadth-first search can check that no search branch merges with another branch.
I don't understand - how would it do that, without knowledge of all visited states?
A breadth-first search has, by definition, knowledge of all visited states so far. It never tosses anything away. Every time you advance by one step in one branch, you can check whether the resulting state already exists somewhere in the search tree.

A pure depth-first search has no such knowledge and can repeat searches it has already done in the past. (The only thing a depth-first search can check is that it doesn't return to a state already on its current path, which would effectively be an infinite loop.)

It is possible to implement a depth-first search which does not throw away visited states, so that parallel search branches can check whether some state has already been reached and avoid going there again. But then you need the RAM, and you are in no way guaranteed that the first solution found is also the optimal one, so it kind of has the worst of both search methods. (There are some situations where this is a viable approach though, e.g. if you are calculating some kind of Cartesian product on graphs.)
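To illustrate the difference, a minimal sketch (Python; is_goal and successors are hypothetical stand-ins for the game-specific logic):
[code]
from collections import deque

def bfs(start, is_goal, successors):
    # BFS remembers every state it has ever seen, so no branch can
    # re-explore a state that any other branch has already reached.
    visited = {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if is_goal(state):
            return state
        for nxt in successors(state):
            if nxt not in visited:   # checked against ALL branches
                visited.add(nxt)
                queue.append(nxt)
    return None

def dfs(state, is_goal, successors, path=frozenset()):
    # A pure DFS remembers only the states on its current path, so it
    # can only detect loops, not work already done by other branches.
    if is_goal(state):
        return state
    path = path | {state}
    for nxt in successors(state):
        if nxt not in path:          # checked against current path only
            found = dfs(nxt, is_goal, successors, path)
            if found is not None:
                return found
    return None
[/code]
Note that bfs() checks each new state against everything it has ever seen, while dfs() checks only against its own current path.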
Player (6)
Joined: 10/25/2009
Posts: 22
Warp: personally, I don't see where you're coming from with this distinction between DFS and BFS. You can perform both a BFS and a DFS with no knowledge of previously visited nodes - for BFS you'd just keep a queue of nodes to visit, while for DFS you'd keep the current "stack" (see the sketch at the end of this post). Both have the same complexity for a full visit of a tree, but in the case of a graph a BFS would probably get stuck in a loop, while a DFS would have an insane complexity, so it doesn't really matter. I mentioned DFS because in the case of limited RAM, DFS could be faster, as it is more likely to work with a smaller set of nodes than BFS, for which a single depth iteration spans the entire width of the tree.

DarkKobold: that would be awesome, but I think it's best if I optimize my code some more first :)
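Here's the sketch: with no memory of previously visited nodes, the two searches differ only in which end of the pending list the next node is taken from (Python; is_goal and successors are hypothetical):
[code]
def search(start, is_goal, successors, breadth_first=True):
    # With no memory of visited nodes, BFS and DFS differ only in
    # whether the pending list acts as a queue (FIFO) or a stack (LIFO).
    frontier = [start]
    while frontier:
        state = frontier.pop(0) if breadth_first else frontier.pop()
        if is_goal(state):
            return state
        frontier.extend(successors(state))
    return None
[/code]
On a graph with cycles this shows both failure modes: as a queue the frontier grows without duplicate elimination, and as a stack the search can circle forever.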
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Obviously you cannot perform a BFS if you don't store the paths which have been checked so far because, as you say, you would easily end up in an infinite loop. Rather obviously you need to keep the full search tree which has been constructed so far in order to avoid going to branches which would form a loop. The difference between BFS and DFS is that the former can also check if a parallel search branch has already checked some state, while a DFS cannot (unless it also keeps the entire search tree in memory and doesn't discard anything). (The other difference is, as mentioned, that with BFS the first found solution is also the optimal one.)
Player (6)
Joined: 10/25/2009
Posts: 22
Warp wrote:
Obviously you cannot perform a BFS if you don't store the paths which have been checked so far because, as you say, you would easily end up in an infinite loop. Rather obviously you need to keep the full search tree which has been constructed so far in order to avoid going to branches which would form a loop. The difference between BFS and DFS is that the former can also check if a parallel search branch has already checked some state, while a DFS cannot (unless it also keeps the entire search tree in memory and doesn't discard anything). (The other difference is, as mentioned, that with BFS the first found solution is also the optimal one.)
This is turning into an argument of semantics. You say that keeping the entire search tree in memory is an innate property of BFS (which is not true for BFS over a tree) and not of DFS, while I say that there is no discernible difference because there is no practical difference between infinite and astronomical execution time, so for a practical search over a graph you need to store the search tree for both.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
CyberShadow wrote:
This is turning into an argument of semantics.
It's certainly not about semantics.
You say that keeping the entire search tree in memory is an innate property of BFS
It is for a working BFS. Obviously if a BFS ends up in an infinite loop, it's not working. It's a buggy implementation. (There are problems where checking for loops is not needed due to the nature of the search space, but in most puzzle games where repeating the same positions can happen any time, that's certainly not the case.)
and not of DFS
A DFS only needs to store the current search branch in order to avoid infinite loops. It has no requirement to store the entire search tree for it to work. That was the whole original point: DFS can work with a minimal amount of memory (even if it takes a lot longer to find a solution). Thus storing all the states found so far is a requirement for BFS but not for DFS.
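A minimal sketch of this (Python; successors is a hypothetical move generator), where the only states held in memory are those on the current branch:
[code]
def dfs(state, is_goal, successors, branch=None):
    # The only stored states are those on the current branch; each one
    # is forgotten again as soon as the search backtracks past it.
    if branch is None:
        branch = set()
    if is_goal(state):
        return [state]
    branch.add(state)
    for nxt in successors(state):
        if nxt not in branch:        # loop check within this branch only
            solution = dfs(nxt, is_goal, successors, branch)
            if solution is not None:
                return [state] + solution
    branch.remove(state)             # backtrack: discard the state
    return None
[/code]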
while I say that there is no discernible difference because there is no practical difference between infinite and astronomical execution time, so for a practical search over a graph you need to store the search tree for both.
If you are going to store all the found states anyway, then why use DFS at all? BFS finds the optimal solution faster. As I said, a DFS which never discards its history has the worst of both worlds (at least when the task is to find an optimal solution).
Player (6)
Joined: 10/25/2009
Posts: 22
This is becoming tiresome.
Warp wrote:
It is for a working BFS. Obviously if a BFS ends up in an infinite loop, it's not working. It's a buggy implementation.
Warp wrote:
A DFS only needs to store the current search branch in order to avoid infinite loops. It has no requirement to store the entire search tree for it to work.
You're saying that an algorithm that enters an infinite loop is broken while an algorithm that takes zillions of years to run is not. The complexity of DFS (with no knowledge of other nodes) over a graph grows exponentially with the number of loops in the graph. For a complete graph with N nodes, the complexity is N!. DFS would be preferable only in the unfathomable case of a very large search space (large enough for memory to be a problem) but with very few loops (as each one increases the execution time exponentially).
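(For a sense of scale: 20! is already about 2.4 × 10^18, and each additional node multiplies the count again, so an exhaustive DFS is hopeless even for modest complete graphs.)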
Warp wrote:
If you are going to store all the found states anyway, then why use DFS at all? BFS finds the optimal solution faster. As I said, a DFS which never discards its history has the worst of both worlds (at least when the task is to find an optimal solution).
I have already explained this when I first mentioned DFS. DFS over a search space with more nodes than available fast memory would probably be more cache-efficient than BFS, as it is more likely to have a much smaller "working set" (the nodes it needs to look up). Even though it'd have to search deeper than BFS, its total execution time can be considerably lower.

I've begun rewriting my solver to use DFS + caching using splay trees, but since DFS vs. BFS performance seems to be such a hot topic, I think I'll take a detour and implement caching first, then benchmark it with both BFS and DFS (a rough sketch of the caching idea is at the end of this post).

P.S. After implementing the optimizations suggested by Tub and ZeXr0, my solver managed to solve 3-7 in 321 steps, which is quite an improvement over Nitrodon's 360-step solution! Can't wait to see how much the other solutions can be improved.
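The caching sketch mentioned above, with a plain dict standing in for the size-bounded splay tree (an illustration of the idea, not the actual solver code):
[code]
def dfs_cached(state, depth_left, is_goal, successors, cache):
    # Returns True iff a solution is reachable within depth_left moves.
    # cache maps state -> the largest remaining-depth budget we have
    # already searched it with; revisiting it with a smaller (or equal)
    # budget cannot discover anything new, so we prune immediately.
    if is_goal(state):
        return True
    if depth_left == 0 or cache.get(state, -1) >= depth_left:
        return False
    cache[state] = depth_left
    return any(dfs_cached(nxt, depth_left - 1, is_goal, successors, cache)
               for nxt in successors(state))
[/code]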
Skilled player (1652)
Joined: 11/15/2004
Posts: 2202
Location: Killjoy
CyberShadow wrote:
This is becoming tiresome.
Don't let Warp get you down. Anyway, I tried to read those articles on BFS vs. DFS, and this thread. And, they still don't make sense. Any chance you could explain them in more common terms?
Sage advice from a friend of Jim: So put your tinfoil hat back in the closet, open your eyes to the truth, and realize that the government is in fact causing austismal cancer with it's 9/11 fluoride vaccinations of your water supply.
Player (6)
Joined: 10/25/2009
Posts: 22
DarkKobold wrote:
Anyway, I tried to read those articles on BFS vs. DFS, and this thread. And, they still don't make sense. Any chance you could explain them in more common terms?
I don't think I could explain it better than Wikipedia with its nice illustrations:
http://en.wikipedia.org/wiki/Breadth-first_search
http://en.wikipedia.org/wiki/Depth-first_search

In our case, the nodes represent unique states of the level (the positions of the players, blocks and turnstiles, as well as filled holes). The lines connecting the nodes represent the player's actions, which transition the game from one state to another (for example, moving down onto an empty square creates a new state with a different player vertical coordinate than the "parent" state). The search problem amounts to trying all move combinations until the game reaches the state where all players have exited the level.
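To make that concrete, a node and its outgoing edges might look something like this in code (a simplified sketch; the State fields and apply_move are made up for illustration):
[code]
from collections import namedtuple

# One node of the search graph: a complete snapshot of the level.
State = namedtuple('State', ['players', 'blocks', 'turnstiles', 'filled_holes'])

MOVES = [(0, -1), (0, 1), (-1, 0), (1, 0)]  # up, down, left, right

def successors(state):
    # Each legal player action is an edge leading to a new State.
    for move in MOVES:
        new_state = apply_move(state, move)  # apply_move: hypothetical game rules
        if new_state is not None:            # None means the move was illegal
            yield new_state
[/code]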
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
CyberShadow wrote:
You're saying that an algorithm that enters an infinite loop is broken while an algorithm that takes zillions of years to run is not.
Isn't that rather obvious? If the algorithm enters an infinite loop and never finds a solution, it is broken. If the algorithm always finds a solution, then it's not broken. Computational complexity has nothing to do with whether an algorithm is correct or broken.

I don't even understand what it is that you are arguing here. It's like you are comparing a working DFS implementation (one which eventually finds a solution) with a broken BFS implementation (one which may end up in an infinite loop). What exactly is it that you are claiming? That BFS always has the possibility of entering an infinite loop while DFS doesn't? Or something else?
The complexity of DFS (with no knowledge of other nodes) over a graph grows exponentially with the number of loops in the graph. For a complete graph with N nodes, the complexity is N!.
That depends on the algorithm in question. Some graph searching algorithms can be performed in linear time using DFS; others have higher computational complexities. For example, the problem "find a path from node A to node B in an undirected graph" can be solved in linear time using DFS. (Why? Because you can raise a flag in each node every time you recurse into it, so that you never consider that node again. Thus you will perform at most n steps during the search, where n is the total number of nodes.)

Of course that's a rather simple problem. More complicated problems (such as "find the shortest path from node A to node B") may require more complex algorithms. It really depends on the nature of the problem.
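Concretely, something like this (a Python sketch, with the graph as a plain adjacency dict):
[code]
def find_path(graph, a, b, visited=None):
    # graph: dict mapping each node to a list of its neighbours.
    # A node is flagged on first visit and never reconsidered, so the
    # search does O(nodes + edges) work in total.
    if visited is None:
        visited = set()
    if a == b:
        return [b]
    visited.add(a)
    for neighbour in graph[a]:
        if neighbour not in visited:
            path = find_path(graph, neighbour, b, visited)
            if path is not None:
                return [a] + path
    return None
[/code]
For example, find_path({'A': ['B'], 'B': ['A', 'C'], 'C': ['B']}, 'A', 'C') returns ['A', 'B', 'C'].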
Warp wrote:
If you are going to store all the found states anyway, then why use DFS at all? BFS finds the optimal solution faster. As I said, a DFS which never discards its history has the worst of both worlds (at least when the task is to find an optimal solution).
CyberShadow wrote:
I have already explained this when I first mentioned DFS. DFS over a search space with more nodes than available fast memory would probably be more cache-efficient than BFS, as it is more likely to have a much smaller "working set" (the nodes it needs to look up). Even though it'd have to search deeper than BFS, its total execution time can be considerably lower.
That assumes that both algorithms perform approximately the same amount of work in total. However, DFS might perform enormously more work than BFS in certain cases.
I've begun rewriting my solver to use DFS + caching using splay trees, but since DFS vs. BFS performance seems to be such a hot topic I think I'll take a detour and implement caching first, then benchmark it on BFS and DFS.
Note that I'm not really advocating either one. I'm just pointing out their benefits and drawbacks. The most optimal solution would be, as mentioned, to devise some kind of heuristic which would cut off most of the search tree right away (even if the overall computational complexity remains the same).
Joined: 9/6/2009
Posts: 24
Location: Renton, WA
Warp wrote:
If the algorithm enters an infinite loop and never finds a solution, it is broken. If the algorithm always finds a solution, then it's not broken.
A BFS that ignores history will still find the solution in the presence of loops in the graph (although it may do a lot of extra work). At each level, there is still a finite number of nodes, so it will eventually progress to the level containing the solution.
Player (6)
Joined: 10/25/2009
Posts: 22
Warp wrote:
CyberShadow wrote:
You're saying that an algorithm that enters an infinite loop is broken while an algorithm that takes zillions of years to run is not.
Isn't that rather obvious? If the algorithm enters an infinite loop and never finds a solution, it is broken. If the algorithm always finds a solution, then it's not broken. Computational complexity has nothing to do with whether an algorithm is correct or broken.

I don't even understand what it is that you are arguing here. It's like you are comparing a working DFS implementation (one which eventually finds a solution) with a broken BFS implementation (one which may end up in an infinite loop). What exactly is it that you are claiming? That BFS always has the possibility of entering an infinite loop while DFS doesn't? Or something else?
This is why I said this is a semantic argument. If an algorithm won't solve my problem within my lifetime, it's just as "broken" for me. Your original claim that keeping the entire search tree in memory is an innate property of BFS rests only on your claim that on graphs it would enter an infinite loop (which isn't even true, as cwitty pointed out).
Warp wrote:
That depends on the algorithm in question. Some graph searching algorithms can be performed in linear time using DFS; others have higher computational complexities. For example, the problem "find a path from node A to node B in an undirected graph" can be solved in linear time using DFS. (Why? Because you can raise a flag in each node every time you recurse into it, so that you never consider that node again. Thus you will perform at most n steps during the search, where n is the total number of nodes.)
How is that different from "keeping the entire search tree in memory" (aside from not having to store the nodes' "parents")?
Warp wrote:
Of course that's a rather simple problem. More complicated problems (such as "find the shortest path from node A to node B") may require more complex algorithms. It really depends on the nature of the problem.
I don't understand how this is related. Please provide specific examples and show how they're related to this discussion.
Warp wrote:
That assumes that both algorithms perform approximately the same amount of work in total. However, DFS might perform enormously more work than BFS in certain cases.
There are no "certain cases" here other than solving Kwirk levels, and I'm sorry if I gave the impression that I was speaking generally. We already have an "upper bound" - the length of an already-known solution. As better solutions are found, the maximum search depth is reduced further.
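In code, the bound is a one-line prune (a sketch; is_goal and successors are stand-ins for the game logic):
[code]
def dfs_bounded(state, depth, best_known, is_goal, successors):
    # best_known is the length of an already-known solution; any branch
    # that reaches that depth without finishing cannot improve on it.
    if depth >= best_known:
        return None
    if is_goal(state):
        return depth
    best = None
    for nxt in successors(state):
        result = dfs_bounded(nxt, depth + 1, best_known, is_goal, successors)
        if result is not None and (best is None or result < best):
            best = result
            best_known = result   # tighten the bound as better solutions appear
    return best
[/code]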
Warp wrote:
Note that I'm not really advocating either one. I'm just pointing out their benefits and drawbacks.
I'm glad that we're done with abstract notions vulnerable to semantic disagreements.
Warp wrote:
The most optimal solution would be, as mentioned, to devise some kind of heuristic which would cut off most of the search tree right away (even if the overall computational complexity remains the same).
Yes, that would help, but it wouldn't need to be THE "optimal solution" (just one of several optimizations), and so far it's a "would be nice if". Also you probably didn't mean to use the term "heuristic", as it implies that it might not lead to the optimal result.
Skilled player (1652)
Joined: 11/15/2004
Posts: 2202
Location: Killjoy
CyberShadow wrote:
DarkKobold wrote:
Anyway, I tried to read those articles on BFS vs. DFS, and this thread. And, they still don't make sense. Any chance you could explain them in more common terms?
I don't think I could explain it better than Wikipedia with its nice illustrations:
http://en.wikipedia.org/wiki/Breadth-first_search
http://en.wikipedia.org/wiki/Depth-first_search

In our case, the nodes represent unique states of the level (the positions of the players, blocks and turnstiles, as well as filled holes). The lines connecting the nodes represent the player's actions, which transition the game from one state to another (for example, moving down onto an empty square creates a new state with a different player vertical coordinate than the "parent" state). The search problem amounts to trying all move combinations until the game reaches the state where all players have exited the level.
So, for a tree that will be 100+ levels deep, does this make any sense? In other words, if it is known that the solution won't happen in the first 99 levels, how is this even feasible? You aren't 'checking' for the solution at level 100 without first going through the 99 levels before it. I'm still confused.
Sage advice from a friend of Jim: So put your tinfoil hat back in the closet, open your eyes to the truth, and realize that the government is in fact causing austismal cancer with it's 9/11 fluoride vaccinations of your water supply.
Active player (355)
Joined: 1/16/2008
Posts: 358
Location: The Netherlands
DarkKobold: Yeah, the complete tree contains all possible 'outcomes'... most of which are not successful.

All the nodes at depth 1 represent the possible outcomes after 1 action; all the nodes at depth 2 represent the possible outcomes after 2 actions. A tree search algorithm is a way of traversing all (or preferably a subset) of the tree in an attempt to find a path (sequence of actions) that leads to a successful node.

DFS (Depth First): build up a plan and keep changing the ending until we've tried them all (note: if no solution is found with any action at step N, alter step N-1, etc.).

BFS (Breadth First): keep a what-could-we-have-after-N-actions database, while slowly increasing N.
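In code, the what-could-we-have-after-N-actions approach looks roughly like this (a Python sketch; is_goal and successors stand in for the game logic):
[code]
def bfs_layers(start, is_goal, successors):
    seen = {start}
    layer = {start}          # the after-N-actions "database"
    n = 0
    while layer:
        for state in layer:
            if is_goal(state):
                return n     # a solution exists after n actions
        # build the set of outcomes reachable after N+1 actions
        layer = {nxt for state in layer for nxt in successors(state)} - seen
        seen |= layer
        n += 1
    return None
[/code]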
TASes: [URL=http://tasvideos.org/Movies-298up-Obs.html]Mr. Nutz (SNES), Young Merlin 100% (SNES), Animaniacs 100% (SNES)[/URL]
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
cwitty wrote:
Warp wrote:
If the algorithm enters an infinite loop and never finds a solution, it is broken. If the algorithm always finds a solution, then it's not broken.
A BFS that ignores history will still find the solution in the presence of loops in the graph (although it may do a lot of extra work). At each level, there is still a finite number of nodes, so it will eventually progress to the level containing the solution.
Now that you mention it, you are right. Checking for loops in a BFS saves some work (and memory), but omitting the check wouldn't stop it from finding the solution, as all the other branches are advancing in parallel as well.
Post subject: Re: Algorithm superiority
Joined: 12/10/2006
Posts: 118
CyberShadow wrote:
... it found better solutions for 5 levels (with the record gap being solving 2-3 in 98 steps vs. Nitrodon's 104-step solution)...
Which levels were those? In 2-2 you can go as low as 213 vs. Nitrodon's 231 if you go for the left 3x1 block and not the right one after you dump that 4x1 block in the hole. And do you minimize time or moves?
Player (6)
Joined: 10/25/2009
Posts: 22
Haven't solved 2-2 yet. I'm almost done with my new algorithm though. I minimize time (frames), as I mentioned here.
Post subject: Re: Algorithm superiority
Active player (287)
Joined: 3/4/2006
Posts: 341
thommy3 wrote:
In 2-2 you can do as low as 213 vs Nitrodon's 231 if you go for the left 3x1 block and not the right after you dump that 4x1 block in the hole.
ARGH, I can't believe I didn't notice that! I just tried this and got 205 steps, which is 238 frames shorter than the old route.
Joined: 12/10/2006
Posts: 118
There is a small improvement in 2-6: you save 2 moves without adding pushes if, after you push that 1x1 block to the left, you move it upwards first before moving the right 2x1 block.
Blue_Streak
He/Him
Joined: 5/5/2007
Posts: 12
Location: Mesa, Arizona, USA
I'm glad there's a topic about taking an algorithmic approach to TASing. I recently thought about trying to write a bot that finds the frame-perfect solution to each of the 14 special stages in Sonic 3 & Knuckles. However, I don't actually know whether it's doable, because the problem is a more complex version of the rectilinear traveling salesman problem (TSP). The TSP itself is already in the complexity class NP-hard, which means that there's no algorithm for solving it in a reasonable amount of time for all but the smallest values... unless I buy a quantum computer. I don't even know if the approximate solutions are tractable with the computing resources I have access to.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
The traveling salesman problem is NP-hard, but finding the shortest path in a graph is not.
http://en.wikipedia.org/wiki/Shortest_path_problem
The difference between TSP and the shortest path problem is that the latter doesn't have to find a route through all nodes, only a route from node A to node B.

What makes finding an optimal solution to a game hard is not the complexity of the algorithm, but the size of the graph. Even though the shortest path searching algorithm might be polynomial-time, the size of the graph is exponential (each node splits into n new nodes, so the number of nodes grows exponentially as you advance), so it doesn't really help.
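(To put numbers on it: with just 4 possible moves per state, a search 50 moves deep already spans up to 4^50 ≈ 1.3 × 10^30 states, so even a linear-time path algorithm is helpless against the sheer size of the graph.)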
Joined: 5/9/2008
Posts: 35
Location: USA
Blue_Streak wrote:
The TSP itself is already in the complexity class NP-hard, which means that there's no algorithm for solving it in a reasonable amount of time for all but the smallest values.
Not exactly correct. NP-hard means there's no known algorithm for solving all possible inputs in a reasonable amount of time. There are always special cases (e.g., Euclidean TSP) that can be solved efficiently on large inputs, even on conventional computing devices.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
ntclark wrote:
Not exactly correct. NP-hard means there's no known algorithm for solving all possible inputs in a reasonable amount of time. There are always special cases (e.g., euclidian tsp) that can be solved efficiently on large inputs in a reasonable amount of time, even on conventional computing devices.
Are you sure that the optimal solution for a Euclidean TSP can be found in polynomial time? Or simply a solution which is "good enough"? Remember that we are talking about absolute optimal solutions here.

The Wikipedia article doesn't make it completely clear, as it just says "in general, for any c > 0, there is a polynomial-time algorithm that finds a tour of length at most (1 + 1/c) times the optimal for geometric instances of TSP in O(n (log n)^O(c)) time". I understand that to mean that finding the optimal solution is still NP-hard; it's just that a very close-to-optimal solution can be found in polynomial time.
Blue_Streak
He/Him
Joined: 5/5/2007
Posts: 12
Location: Mesa, Arizona, USA
Warp wrote:
The traveling salesman problem is NP-hard, but finding the shortest path in a graph is not. http://en.wikipedia.org/wiki/Shortest_path_problem The difference between TSP and the shortest path problem is that the latter doesn't have to find a route through all nodes, only a route from node A to node B. What makes finding an optimal solution to a game hard is not the complexity of the algorithm, but the size of the graph. Even though the shortest path searching algorithm might be polynomial-time, the size of the graph is exponential (each node splits into n new nodes, so the number of nodes grows exponentially as you advance), so it doesn't really help.
Let's ignore the effects of the actual spheres for the moment and consider just points. We can phrase the entire problem thus: on a 32x32 grid G (containing 1024 nodes), find the shortest path from node A (the start) that passes through n particular nodes at least once (where n is the number of blue spheres). We can't easily define where the last node is, because we don't know on which node the optimal path ends; but unlike the TSP, we are not required to return to node A, so this problem requires one less step than that one.

For the first step there are n possible nodes on G to approach (because we only care about paths that lead to a blue sphere). We can divide this step into n shortest path problems on G. The shortest path problem itself is quite easy in the ideal version of the problem, because we know its solution already. Since G is a rectilinear grid, we know that the shortest path to a node having coordinates {x,y} relative to the previous node is x + y units in length. The maximum value of x + y is 32, since G wraps and no node can ever be more than 16 + 16 units away. We define this x-by-y region to be R. If we assume that there is no back-tracking in our motion on R and that all moves are directed toward the desired node, there are actually (x+y)!/(x!*y!) shortest paths of identical length (draw a grid and test for yourself if you'd like). However, in the real problem every turn costs us frames, so the two paths along the edges of R that turn only once are actually the optimal paths. In any subsequent mention of "shortest path" I mean that which requires the fewest frames.

After completing a given path there are n-1 nodes left to travel to, so we have to repeat the shortest path procedure above. Thus the problem amounts to computing the shortest paths between all possible pairs of the n nodes on G, and then comparing the orderings: we have to compute n! sums of shortest paths and keep the smallest value. Ultimately that should be our answer in the idealized version of the problem, but we also have to consider lots of other things.

Often there won't be two identical shortest paths, but one that is more complex to calculate, and the frames spent turning on a blue sphere at the end of a segment to position Sonic for the next segment must also be considered. Sometimes the shortest path between two spheres will be greater than x + y units, because the positioning of bumpers, catapults, and red spheres is such that you can't find a sequence of turns that gets you to the next blue sphere in x + y steps. Or the number of turns would be so many that the frames lost in turning would be regained on a somewhat longer-distance but overall faster route with fewer turns (this would have to be tested). This also includes the use of bumpers to make you go backwards.

Furthermore, the problem gets worse because taking different paths through the blue spheres will mean that the state of G during each shortest-path calculation is different (i.e. some blue spheres will already have been turned to red spheres or rings depending on the prior paths taken). You'd have to compute the shortest path for all possible states of R between Sonic and the next blue sphere. For k blue spheres present initially upon R, there are 2^k possible states that they could be in, and technically more than that, since some could be either rings or red spheres after being touched.

So when computing the sum of the shortest paths in the final problem, we're required to know which of the at least 2^k different states R is in, given the previously taken paths, and this will affect which path is actually the shortest. In theory we'd then have at least ((x+y)!/(x!y!))*2^k possible paths to test for each step, and then need to know which of 2^k values to use for each step of the n! sums. This sounds terrible for us, but it's important to remember that paths with fewer turns are greatly preferred, and so these are far more likely to be the shortest paths. The problem is that we have no way of knowing a priori that these paths are definitively the shortest ones, so we have to test the others too. However, there is another saving grace: Sonic can't turn on red spheres, so any possible path that requires turning on a point labeled "red sphere" can be disregarded as a possibility.

What are we left with? A real mess, mainly because there's no way of knowing in advance with absolute certainty how two complete sequences through the n blue spheres compare to each other without actually calculating them. However, we may be able to place some restrictions on the calculation of the n! sums to make things easier. You can't guarantee that always selecting the next closest blue sphere after completing each step will lead to the optimal answer, but you can be quite confident that always choosing the sphere farthest away will not produce the optimal answer. Thus, we can start by computing what is likely one of the shorter routes by always choosing the next closest blue sphere, and we can then use that solution as an upper limit. Any branch of sums that exceeds that length at some point along its calculation gets cut off, so we eliminate a lot of the possibilities this way. Also, if we are eliminating the longer routes between spheres in this way, then we may not have to do shortest-path calculations for the spheres farther away at all, which of course would greatly improve the feasibility of the computation.

So the best approach is probably one where we input human knowledge of a "good" solution and then use that as an upper bound on the problem: start by calculating a given shortest path, add it to the sum, calculate another shortest path, add it to the sum, etc., and eventually the combination is likely to exceed the length of the known solution, at which point any sequence branching off from that point is truncated and a different sequence can be initiated at a previous step. Sorry for the tl;dr post.
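To sketch that branch-and-bound idea in Python (using plain wrap-around Manhattan distance in place of the real turn-aware frame cost, so this ignores the turn, bumper, and sphere-state complications discussed above):
[code]
def dist(a, b, size=32):
    # Manhattan distance on the wrapping (toroidal) grid G.
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return min(dx, size - dx) + min(dy, size - dy)

def greedy_bound(start, spheres):
    # Always grab the closest remaining sphere: rarely optimal,
    # but a cheap initial upper bound for the search.
    total, pos, left = 0, start, set(spheres)
    while left:
        nxt = min(left, key=lambda s: dist(pos, s))
        total += dist(pos, nxt)
        pos, left = nxt, left - {nxt}
    return total

def branch_and_bound(pos, left, so_far, best):
    # Truncate any ordering whose partial sum already matches or
    # exceeds the best complete route found so far.
    if so_far >= best[0]:
        return
    if not left:
        best[0] = so_far
        return
    for s in sorted(left, key=lambda s: dist(pos, s)):  # closest first
        branch_and_bound(s, left - {s}, so_far + dist(pos, s), best)

def shortest_route(start, spheres):
    best = [greedy_bound(start, spheres)]   # greedy/human upper bound
    branch_and_bound(start, frozenset(spheres), 0, best)
    return best[0]
[/code]
The greedy route supplies the initial bound; the recursion then only explores orderings that stay strictly below the best sum found so far.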
Joined: 5/9/2008
Posts: 35
Location: USA
Warp wrote:
Are you sure that the optimal solution for a Euclidean TSP can be found in polynomial time? Or simply a solution which is "good enough"? Remember that we are talking about absolute optimal solutions here.
No, I'm not sure about Euclidean TSP, and I probably shouldn't have used that example. Here's an example I am sure about: many common instances of integer linear programming (an NP-hard problem) reduce to the max network flow problem, which can be solved optimally in polynomial time. Thus good ILP solvers will check for this special case and solve it optimally very quickly. I assume there are similar TSP formulations which can also be solved optimally without having to check an exponential number of solutions.
Blue_Streak
He/Him
Joined: 5/5/2007
Posts: 12
Location: Mesa, Arizona, USA
According to Wikipedia, many (if not all) restricted cases of the TSP are still NP-hard, including the Euclidean version. I think trying a branch-and-bound approach (as I suggested at the end of my previous post) is reasonable, especially because humans have an interesting way of finding close-to-optimal solutions by eye, which gives us an upper bound for this problem. The toughest one will be n=144 though; that's probably still unreasonable with a branch-and-bound method.