What are the limitations of the hill climbing algorithm and how to overcome them?
What are the limitations of the hill climbing algorithm? How can we overcome these limitations?
Tags: algorithm, search, problem-solving
asked Nov 15 at 15:03 by Abbas Ali; edited Nov 17 at 11:31 by nbro
2 Answers
Accepted answer (Ugnes, answered Nov 15 at 16:28)
As @nbro has already said, hill climbing is a family of local search algorithms, so when you said "hill climbing" in the question I assumed you meant the standard version. Standard hill climbing has some limitations and often gets stuck in the following scenarios:
- Local maxima: when the algorithm reaches the vicinity of a local maximum, it is drawn towards that peak and gets stuck there, having nowhere better to go.
- Ridges: sequences of local maxima that are difficult for the algorithm to navigate.
- Plateaux: flat regions of the state space. With no uphill move available, the algorithm often wanders aimlessly on the plateau.
To resolve these issues, many variants of hill climbing have been developed. The most commonly used are the following (a minimal sketch of them appears after this list):
Stochastic hill climbing selects at random from among the uphill moves; the probability of selection varies with the steepness of the move.
First-choice hill climbing implements the above by generating successors randomly until one better than the current state is found.
Random-restart hill climbing conducts a series of searches from randomly generated initial states until a goal state is found.
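Below is a minimal sketch of two of these variants in Python. The helpers objective(state), neighbours(state) and random_state() are hypothetical placeholders for whatever problem you are solving, and the random-restart loop is capped at a fixed number of restarts rather than stopping at a goal test:

    import random

    def stochastic_hill_climb(start, objective, neighbours, max_steps=1000):
        """Pick an uphill move at random, weighted by how steep it is."""
        current = start
        for _ in range(max_steps):
            uphill = [n for n in neighbours(current) if objective(n) > objective(current)]
            if not uphill:                  # local maximum, ridge or plateau: nowhere to climb
                return current
            weights = [objective(n) - objective(current) for n in uphill]
            current = random.choices(uphill, weights=weights, k=1)[0]
        return current

    def random_restart_hill_climb(random_state, objective, neighbours, restarts=20):
        """Climb from several randomly generated initial states and keep the best result."""
        best = None
        for _ in range(restarts):
            candidate = stochastic_hill_climb(random_state(), objective, neighbours)
            if best is None or objective(candidate) > objective(best):
                best = candidate
        return best

First-choice hill climbing would follow the same pattern, but accept the first randomly generated successor that improves on the current state instead of weighting all of them.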
The success of hill climbing depends on the shape of the state-space landscape: when there are few local maxima and plateaux, these variants work very well. But many real-world problems have a landscape that looks more like a widely scattered family of balding porcupines on a flat floor, with miniature porcupines living on the tip of each porcupine needle (as described in Chapter 4 of Artificial Intelligence: A Modern Approach). NP-hard problems typically have an exponential number of local maxima to get stuck on.
The following algorithms have been developed to overcome these kinds of issues (a small simulated-annealing sketch follows the list):
- Simulated Annealing
- Local Beam Search
- Genetic Algorithms
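As an illustration of the first remedy, here is a minimal simulated-annealing sketch in the same style, again using the hypothetical objective and neighbours helpers. Unlike hill climbing it sometimes accepts downhill moves, with a probability that shrinks as the temperature cools, which lets it escape local maxima:

    import math
    import random

    def simulated_annealing(start, objective, neighbours, t0=1.0, cooling=0.995, steps=10000):
        """Accept downhill moves with probability exp(delta / T); T decays every step."""
        current, t = start, t0
        for _ in range(steps):
            nxt = random.choice(neighbours(current))
            delta = objective(nxt) - objective(current)
            if delta > 0 or random.random() < math.exp(delta / t):
                current = nxt          # uphill moves are always taken, downhill ones sometimes
            t *= cooling               # lower temperature: downhill moves become rarer
        return current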
Reference Book - Artificial Intelligence: A Modern Approach
There are more alternatives available to overcome the problems of hill climbing, namely permutation groups, pattern databases and grammar-based search. They use domain-specific knowledge to search the state space faster. – Manuel Rodriguez, Nov 15 at 16:40
Yes @ManuelRodriguez, algorithms that rely on domain-specific knowledge give excellent results. But I tried to keep the answer generic, mentioning a few of the ways to overcome the limitations of hill climbing. – Ugnes, Nov 15 at 16:45
Answer (nbro, answered Nov 15 at 15:33, edited Nov 15 at 15:39)
Hill climbing is not an algorithm, but a family of "local search" algorithms. Specific algorithms which fall into the category of "hill climbing" algorithms are 2-opt, 3-opt, 2.5-opt, 4-opt, or, in general, any N-opt. See chapter 3 of the paper "The Traveling Salesman Problem: A Case Study in Local Optimization" (by David S. Johnson and Lyle A. McGeoch) for more details regarding some of these local search algorithms (applied to the TSP).
What differentiates one algorithm in this category from the other is the "neighbourhood function" they use (in simple words, the way they find neighbouring solutions to a given solution). Note that, in practice, this is not always the case: often these algorithms have several different implementations.
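To make the idea of a neighbourhood function concrete, here is a minimal 2-opt hill-climbing sketch for the TSP (a didactic sketch, not one of the tuned implementations discussed in the paper): a neighbour of a tour is obtained by reversing the segment between two positions, and improving moves are applied until none is left.

    def tour_length(tour, dist):
        """Length of the closed tour; dist[i][j] is the distance between cities i and j."""
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def two_opt(tour, dist):
        """Plain 2-opt local search: reverse segments as long as that shortens the tour."""
        best = list(tour)
        improved = True
        while improved:
            improved = False
            for i in range(1, len(best) - 1):
                for j in range(i + 1, len(best)):
                    candidate = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                    if tour_length(candidate, dist) < tour_length(best, dist):
                        best, improved = candidate, True
        return best

When this loop terminates, the tour is 2-opt locally optimal: no single segment reversal improves it, even though a shorter tour may still exist elsewhere in the search space.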
The most evident limitation of hill climbing algorithms follows from their nature: they are local search algorithms, so they usually only find local maxima (or minima). Once such an algorithm has converged to a local optimum, it cannot reach a better solution that exists elsewhere in the search space, even one lying close to the solution it found, because getting there would require temporarily moving downhill. It is basically trapped.
Local search algorithms are therefore not usually used alone; they are used as subroutines of other meta-heuristic algorithms, like simulated annealing, iterated local search or any of the ant-colony algorithms. So, to overcome their limitations, we combine them with algorithms that have a probabilistic component and can escape local optima in search of global minima or maxima (e.g., any of the ant-colony algorithms).
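A minimal sketch of that pattern is iterated local search: perturb the best solution found so far, re-run the local search on the perturbed solution, and keep the result if it improves. The local_search, perturb and cost callables are placeholders for problem-specific routines (for the TSP, local_search could be the 2-opt routine above and perturb a random rearrangement of the tour):

    def iterated_local_search(start, local_search, perturb, cost, iterations=100):
        """Escape local optima: perturb the current best, re-optimise, keep improvements."""
        best = local_search(start)
        for _ in range(iterations):
            candidate = local_search(perturb(best))
            if cost(candidate) < cost(best):
                best = candidate
        return best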
Nice answer (+1)! Can you recommend a resource (YouTube, blog post, arXiv paper, book) to learn about ant-colony algorithms? I've never heard of them and would like to get a rough idea. – Martin Thoma, Nov 15 at 16:08
@MartinThoma I am afraid I really don't know of a very nice tutorial on ACS. Maybe you can start with the following brief tutorial and the corresponding implementation: cleveralgorithms.com/nature-inspired/swarm/…. If you're also interested in a more serious implementation, applied to the TSP, then have a look at this one: aco-metaheuristic.org/aco-code, implemented by Stützle (and others), one of the contributors to the development of these techniques. – nbro, Nov 15 at 16:16
The asker knows what the formal definition of hill climbing is, because he has read the Wikipedia article. The question goes more in the direction of how to use it for Artificial Intelligence. It's known that hill climbing can only search a local region of the space, which makes it difficult for AI-related problems: usually the search gets stuck in a local optimum, which means, for example, that the shortest route in the Traveling Salesman Problem can't be found. – Manuel Rodriguez, Nov 15 at 16:16
@MartinThoma Anyway, you can also have a look at the research papers. I can only tell you a few important researchers: Dorigo (the first to introduce these techniques, AFAIK), Gambardella and Stützle. Look at their papers; I am not sure which one to suggest. There is also a book dedicated to swarm intelligence (by Bonabeau), if you really want to go into the details. – nbro, Nov 15 at 16:16