Hill climbing is a widely used optimization algorithm in artificial intelligence. The basic loop: initialize a current route and calculate its distance, iterate to the best neighboring route until no improvement is found, and return the best route and its distance. In code, the building blocks might look like this:

    import random
    import numpy as np

    # Distance function: total length of a tour, returning to the starting point
    def calculate_distance(route):
        route_extended = np.append(route, [route[0]], axis=0)  # return to the starting point
        return np.sum(np.sqrt(np.sum(np.diff(route_extended, axis=0)**2, axis=1)))

    # Create a random initial route by shuffling the cities
    def create_initial_route(cities):
        return np.array(random.sample(list(cities), len(cities)))

Hill climbing on its own, though, can get stuck at local optima, which is where simulated annealing comes in. If a neighbor's value is lower, meaning it's worse than our current state, we're going to be less likely to move to that neighboring state. So this is the big picture for simulated annealing: we take the problem and generate random neighbors. We'll always move to a neighbor if it's better than our current state. But even if the neighbor is worse than our current state, we'll sometimes move there, depending on how much worse it is and also based on the temperature. And as a result, the hope, the goal of this whole process, is that as we try to find our way to the global maximum or the global minimum, we can dislodge ourselves if we ever get stuck at a local maximum or a local minimum, in order to eventually make our way to exploring the part of the state space that is going to be the best. Then, as the temperature decreases, we eventually settle there without moving around too much from what we've found to be the globally best thing so far. At the very end, we just return whatever the current state happens to be, and that is the conclusion of this algorithm: we've been able to figure out what the solution is. (A short code sketch of this acceptance rule appears just below.)

And these types of algorithms have a lot of different applications. Anytime you can take a problem and formulate it as something where you can explore a particular configuration, then ask whether any of the neighbors are better than this current configuration, and have some way of measuring that, then there is an applicable case for these hill-climbing, simulated-annealing types of algorithms. Sometimes that's for facility-location-type problems, like when you're trying to plan a city and figure out where the hospitals should be. But there are definitely other applications as well.

And one of the most famous problems in computer science is the traveling salesman problem. The traveling salesman problem is generally formulated like this: I have a whole bunch of cities, indicated by these dots, and what I'd like to do is find some route that takes me through all of the cities and ends up back where I started, some route that starts here, goes through all these cities, and ends up back where I originally started. And what I might like to do is minimize the total distance that I have to travel, or the total cost of taking this entire path. You can imagine this is a problem that's very applicable in situations like when delivery companies are trying to deliver things to a whole bunch of different houses: they want to figure out how to get from the warehouse to all these various houses and back again, using as little time, distance, and energy as possible.
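Here is that sketch: a minimal simulated-annealing loop in Python. The function names, the geometric cooling schedule, and the parameter defaults are illustrative assumptions on my part, not something fixed by the algorithm or taken from the lecture:

    import math
    import random

    def simulated_annealing(initial_state, random_neighbor, cost,
                            temperature=100.0, cooling=0.99, steps=10000):
        # Minimizes cost(state). Assumes random_neighbor(state) returns one
        # randomly chosen neighboring state; both callables are hypothetical
        # helpers supplied by whoever formulates the problem.
        current = initial_state
        current_cost = cost(current)
        for _ in range(steps):
            candidate = random_neighbor(current)
            delta = cost(candidate) - current_cost
            # Always accept improvements; accept a worse state with probability
            # exp(-delta / temperature), which shrinks as delta grows and as
            # the temperature cools.
            if delta < 0 or random.random() < math.exp(-delta / temperature):
                current, current_cost = candidate, current_cost + delta
            temperature *= cooling  # lower the temperature each step
        return current

The occasional uphill (here, higher-cost) move is exactly the dislodging mechanism described above. With that in hand, back to the traveling salesman problem.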
So you might want to try to solve these sorts of problems, but it turns out that solving this particular kind of problem is very computationally difficult and very computationally expensive. It falls into the category of what are known as "NP-complete problems," problems for which no efficient solving algorithm is known. So what we ultimately have to do is come up with some approximation, some way of finding a good solution, even if we're not going to find the globally best solution we possibly could, at least not in a feasible or tractable amount of time.

What we could do is take the traveling salesman problem and try to formulate it using local search, and ask a question like: all right, I can pick some state, some configuration, some route between all of these nodes, and I can measure the cost of that state, figure out what its distance is. I might now want to try to minimize that cost as much as possible. Then the only question is: what does it mean to have a neighbor of this state? What does it mean to take this particular route and have some neighboring route that is close to it but slightly different, such that it might have a different total distance?

There are a number of different definitions for what a neighbor of a traveling salesman configuration might look like, but one way is just to say that a neighbor is what happens if we pick two of these edges between nodes and switch them, effectively. So, for example, I might pick these two edges here that happen to cross, where this node goes here and this node goes there, and go ahead and switch them. What that process generally looks like is removing both of these edges from the graph; taking this node and connecting it to the node it wasn't connected to, so connecting it up here instead; taking the arrows that were originally going this way and reversing them, so they go the other way; and then filling in that last remaining blank, adding an arrow that goes in that direction instead. So by taking two edges and just switching them, I've been able to consider one possible neighbor of this particular configuration. And it looks like this neighbor is actually better: it probably travels a shorter distance to get through all the cities than the current state did. (A code sketch of this move appears at the end of this passage.)

And so you could imagine implementing this idea inside of a hill-climbing or simulated-annealing algorithm, where we repeat this process to take a state of this traveling salesman problem, look at all of the neighbors, and then move to a neighbor if it's better, or maybe even move to a neighbor if it's worse, until we eventually settle on some best solution that we've been able to find. And it turns out that these types of approximation algorithms, even if they don't always find the very best solution, can often do pretty well at finding solutions that are helpful.

So that, then, was a look at local search, a particular category of algorithms that can be used for solving a type of problem where we don't really care about the path to the solution. I didn't care about the steps I took to decide where the hospitals should go; I just cared about the solution itself, where the hospitals should be, or what the route through the traveling salesman journey really ought to be.
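As promised, here is what that two-edge swap, commonly known as a 2-opt move, might look like in code. This is a sketch under my own naming choices, and it assumes the calculate_distance function from the snippet at the top of this section:

    import numpy as np

    def two_opt_neighbor(route, i, j):
        # Remove the edges leaving positions i and j and reconnect the tour,
        # which amounts to reversing the segment between them.
        neighbor = route.copy()
        neighbor[i + 1:j + 1] = route[i + 1:j + 1][::-1]
        return neighbor

    def hill_climb_2opt(route):
        # Greedily take any improving 2-opt move until none exists.
        best = route
        best_dist = calculate_distance(best)
        improved = True
        while improved:
            improved = False
            n = len(best)
            for i in range(n - 1):
                for j in range(i + 2, n):  # j = i + 1 would reverse a single city, a no-op
                    candidate = two_opt_neighbor(best, i, j)
                    dist = calculate_distance(candidate)
                    if dist < best_dist:
                        best, best_dist = candidate, dist
                        improved = True
        return best

Swapping this greedy loop for the simulated-annealing sketch above, with a random 2-opt move as the neighbor function, is one way to let the search climb out of the local optima this version can get stuck in.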
Another type of algorithm that might come up falls into the category of linear programming problems. Linear programming often comes up in a context where we're trying to optimize for some mathematical function, and oftentimes it arises when we have real-numbered values, so it's not just discrete, fixed values that we might have, but any decimal values that we might want to be able to calculate.

Linear programming is a family of problems where we might have a situation that looks like this. The goal of linear programming is to minimize a cost function. (You can invert the numbers and try to maximize instead, but often we'll frame it as trying to minimize.) That cost function has some number of variables, x1, x2, x3, all the way up to xn, just some number of variables that are involved, things whose values I want to know, and it might have coefficients in front of those variables. This is what we would call a "linear equation," where we just have all of these variables, each possibly multiplied by a coefficient, added together. We're not going to square anything or cube anything, because that would give us different types of equations. With linear programming, we're just dealing with linear equations, in addition to linear constraints, where a constraint is going to look something like: some linear combination of all of these variables is less than or equal to some bound b. And we might have a whole number of these various constraints placed on our linear programming exercise. Likewise, just as we can have constraints saying a linear equation is less than or equal to some bound b, a constraint might also be an equality: if you want some combination of variables to be equal to a value, you can specify that. And we can also specify that each variable has lower and upper bounds, that it needs to be a positive number, for example, or a number that is less than 50. There are a number of other choices we can make there for defining what the bounds of a variable are.

But it turns out that if you can take a problem and formulate it in these terms, where your goal is to minimize a cost function subject to particular constraints, subject to equations of this form, where some sequence of variables is less than or equal to a bound or is equal to some particular value, then there are a number of algorithms that already exist for solving these sorts of problems.

So let's go ahead and take a look at an example, a problem that might come up in the world of linear programming. Often this is going to come up when we're trying to optimize for something, we want to be able to do some calculations, and we have constraints on what we're trying to optimize. It might be something like this: in the context of a factory, we have two machines, x1 and x2. x1 costs $50 an hour to run, and x2 costs $80 an hour to run. Our goal, our objective, is to minimize the total cost. But we need to do so subject to certain constraints. So there might be a labor constraint: x1 requires 5 units of labor per hour,
x2 requires 2 units of labor per hour, and we have a total of 20 units of labor that we have to spend. So this is a constraint: we have no more than 20 units of labor that we can spend, and we have to spend it across x1 and x2, each of which requires a different amount of labor per hour. And we might also have a constraint like this, which tells us x1 is going to produce 10 units of output per hour, x2 is going to produce 12 units of output per hour, and the company needs 90 units of output. So we have some goal we need to achieve, 90 units of output, but there are constraints: x1 can only produce 10 units of output per hour, and x2 produces 12 units of output per hour.

These types of problems come up quite frequently, and you can start to notice patterns in them: problems where I am trying to optimize for some goal, minimizing cost, maximizing output, maximizing profits, or something like that, and there are constraints placed on that process. And so now we just need to formulate this problem in terms of linear equations.

So let's start with the first point: two machines, x1 and x2, where x1 costs $50 an hour and x2 costs $80 an hour. Here we can come up with an objective function, our cost function, that might look like this: 50 times x1 plus 80 times x2, where x1 is going to be a variable representing how many hours we run machine x1, and x2 is a variable representing how many hours we run machine x2. What we're trying to minimize is this cost function, which is just how much it costs to run each of these machines per hour, summed up. This is an example of a linear equation, just some combination of these variables with coefficients placed in front of them, and I would like to minimize that total value.

But I need to do so subject to these constraints. x1 requires 5 units of labor per hour, x2 requires 2, and we have a total of 20 units of labor to spend. That gives us a constraint of this form: 5 times x1 plus 2 times x2 is less than or equal to 20, where 20 is the total number of units of labor we have to spend, spent across x1 and x2, each of which requires a different number of units of labor per hour.

And finally, we have this constraint: x1 produces 10 units of output per hour, x2 produces 12, and we need 90 units of output. That might look something like this: 10 times x1 plus 12 times x2, which is the amount of output per hour, needs to be at least 90. If we can do better, great, but it needs to be at least 90. And if you recall from my formulation before, I said that, generally speaking, in linear programming we deal with equality constraints or less-than-or-equal-to constraints. So we have a greater-than-or-equal-to sign here. That's not a problem: whenever we have a greater-than-or-equal-to constraint, we can just multiply the equation by negative 1, and that flips it around to "less than or equal to negative 90" instead of "greater than or equal to 90." That's an equivalent expression we can use to represent this problem.

So now that we have this cost function and the constraints it's subject to, it turns out there are a number of algorithms that can be used to solve these types of problems, and those algorithms go a little bit more into geometry and linear algebra than we're really going to get into.
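Putting those pieces together, the whole example in the standard minimize-subject-to form from earlier, with the output constraint already multiplied through by negative 1, reads:

    Minimize:     50*x1 + 80*x2
    Subject to:    5*x1 +  2*x2 <= 20
                 -10*x1 - 12*x2 <= -90
                  x1 >= 0, x2 >= 0

The non-negativity bounds on x1 and x2 are an assumption I'm adding here; they simply encode that a machine can't run for a negative number of hours.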
But the most popular of these types of algorithms are simplex, which was one of the first algorithms discovered for solving linear programs, and, later on, a class of interior-point algorithms that can be used to solve this type of problem as well. The key is not to understand exactly how these algorithms work, but to realize that these algorithms exist for efficiently finding solutions anytime we have a problem of this particular form.

And so we can take a look, for example, at the production directory, where we have a file called production.py. Here I'm using scipy, which is a Python library with a lot of science-related functions, and I can go ahead and just run this optimization function in order to run a linear program. The linprog function here is going to try to solve this linear program for me, where I provide to this function call all of the data about my linear program. It needs to be in a particular format, which might be a little confusing at first.
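The course's actual production.py isn't reproduced above, but a minimal sketch of what such a call can look like, using scipy.optimize.linprog with the cost function and constraints worked out in the example, is:

    import scipy.optimize

    # Minimize 50*x1 + 80*x2 subject to the two constraints from the example.
    # (A reconstruction for illustration, not the course's exact source file.)
    result = scipy.optimize.linprog(
        [50, 80],          # cost function coefficients: 50*x1 + 80*x2
        A_ub=[[5, 2],      # labor:  5*x1 + 2*x2 <= 20
              [-10, -12]], # output: -10*x1 - 12*x2 <= -90, i.e., at least 90 units
        b_ub=[20, -90],
    )

    if result.success:
        hours1, hours2 = result.x
        print(f"x1: {hours1:.2f} hours")
        print(f"x2: {hours2:.2f} hours")
    else:
        print("No solution")

By default, linprog bounds every variable to be non-negative, which matches the bounds above. For these numbers, the optimum works out to running x1 for 1.5 hours and x2 for 6.25 hours, for a total cost of $575.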