Intro to AI 1


Language: English
Public
Created by: Christian N
4.5 / 5 (1 rating)







Intro to AI 1 - Details

Questions: 72
What does heuristic search mean?
"Promising" search: the search is guided by an informed estimate (a gut feeling) of which states look most promising.
What is Deep Learning?
Deep learning is training on a deep neural network.
What is a Deep Neural Network?
It is a neural net with many hidden layers of neurons.
What are agents?
Agents are entities that perceive and act in an environment; examples include humans, robots, softbots, etc.
What are agent functions?
An agent function maps percept histories to actions, f: P* → A. The agent program runs on the physical architecture to produce f.
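To make f: P* → A concrete, here is a minimal illustrative sketch in Python (our own, not from the lecture; the table contents are a hypothetical toy vacuum world): a table-driven agent that maps its percept history to an action.

```python
# Minimal sketch of an agent function f: P* -> A (illustrative, not from the lecture).
class TableDrivenAgent:
    def __init__(self, table):
        # table maps a tuple of percepts (a percept history) to an action
        self.table = table
        self.percepts = []

    def act(self, percept):
        """The agent program: consume one percept, return one action."""
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "noop")

# Usage: a hypothetical two-square vacuum world
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "right",
    (("A", "clean"), ("B", "dirty")): "suck",
}
agent = TableDrivenAgent(table)
print(agent.act(("A", "clean")))  # -> "right"
print(agent.act(("B", "dirty")))  # -> "suck"
```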
What is a Task Environment (TE)?
A TE is specified by PEAS:
P (Performance measure), E (Environment), A (Actuators), S (Sensors)
Example TE description for an automated taxi:
P: safe, fast, legal, comfortable trip, maximize profits, etc.
E: roads, other traffic, pedestrians, customers, physics, etc.
A: steering, accelerator, brake, horn, lights, etc.
S: cameras, sonar, speedometer, GPS, etc.
What is a rational agent?
A rational agent perceives E through S and acts upon E through A so that each selected action is the one expected to maximize P.
Fully observable vs partially observable environments
An environment is fully observable when the sensors can detect all aspects that are relevant to the choice of action.
Deterministic vs Stochastic (Non-deterministic) environments
If the next environment state is completely determined by the current state and the executed action, the environment is deterministic (uncertainty introduced by other agents is ignored). Otherwise it is stochastic (non-deterministic): the result of an action is characterized by a set of possible outcomes, e.g. tossing a coin.
Episodic (vs. sequential) environments
In an episodic environment the agent's experience can be divided into atomic episodes; the next episode does not depend on the actions taken in previous episodes.
Static (vs. dynamic) environments
If the environment can change while the agent is choosing an action, the environment is dynamic.
Discrete (vs. continuous) environments
This distinction can be applied to the state of the environment, the way time is handled and to the percepts/actions of the agent.
Single-agent (vs. multi-agent) environments
In a single-agent environment there are no other agents maximizing a performance measure that depends on the current agent's actions; otherwise the environment is multi-agent.
What is a known environment?
The environment is known if, in any state, it is clear to the agent which state (or set of states, in case of non-determinism) results from a chosen action.
What is search? (Solving problems)
The process of looking for a sequence of actions that reaches a goal.
What is the design process of an agent?
Formulate the problem, search for a solution, execute the solution.
What is the basic idea of tree search algorithms?
1. Given a (weighted) graph representing an abstraction of the real-world problem, and a search strategy,
2. starting from the initial node,
3. simulate the exploration of the graph by iterating the following steps:
- select a node for expansion according to the strategy;
- return the solution path if the selected node is a goal node;
- generate the successors of the selected node (a.k.a. expand the state).
4. Return failure if all nodes were checked and no solution was found.
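As a sketch of this loop, here is a generic search skeleton in Python (our own illustration; `initial`, `is_goal`, and `successors` are assumed problem callbacks). The `pop` argument selects which fringe node to expand next, which is exactly where the strategies below differ; dropping the `explored` set turns it into tree search.

```python
from collections import deque

def graph_search(initial, is_goal, successors, pop):
    """Generic skeleton; each fringe entry is a path (list of states)."""
    fringe = deque([[initial]])
    explored = set()                    # drop this set to get tree search
    while fringe:
        path = pop(fringe)              # select a node according to the strategy
        state = path[-1]
        if is_goal(state):              # return the solution path if it is a goal
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt in successors(state):   # expand: generate the successors
            fringe.append(path + [nxt])
    return None                         # failure: every node was checked

# pop=lambda f: f.popleft() gives breadth-first search (FIFO fringe);
# pop=lambda f: f.pop() gives depth-first search (LIFO fringe).
```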
What are the dangers of not detecting repeated states?
Failure to detect repeated states can turn a linear problem into an exponential one!
Can a node be generated again once it has been expanded?
No. A node is generated only if it has not been expanded yet.
How are search strategies evaluated?
Strategies are evaluated along the following dimensions:
Completeness: does it always find a solution if one exists?
Time complexity: number of nodes generated/expanded
Space complexity: maximum number of nodes in memory
Optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of:
b: maximum branching factor of the search tree
d: depth of the least-cost solution
m: maximum depth of the state space (can be ∞)
What are uninformed strategies?
Uninformed strategies use only the information available in the problem definition
What are some uninformed search strategies?
Breadth-first search, Uniform-cost search, Depth-first search, Depth-limited search, Iterative-deepening search
Breadth-first search definition?
Expand the shallowest unexpanded node; the fringe is a first-in, first-out (FIFO) queue.
What is a fringe?
"fringe" refers to a data structure that stores all the nodes or states that have been discovered but not yet explored.
What is the problem with BFS?
Space is the problem: BFS keeps every generated node in memory, so memory use grows exponentially with depth.
Uniform-cost search definition?
Expand the least-cost unexpanded node; the fringe is a queue ordered by path cost, lowest first.
Are Breadth-first search and Uniform-cost search equal?
They are equivalent if all step costs are equal.
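A uniform-cost fringe is typically a binary heap ordered by path cost. A minimal sketch (our own, assuming `successors(state)` yields `(step_cost, next_state)` pairs); with all step costs equal to 1, the expansion order coincides with BFS, as the card above says.

```python
import heapq, itertools

def uniform_cost_search(initial, is_goal, successors):
    tie = itertools.count()                  # tie-breaker so paths are never compared
    fringe = [(0, next(tie), [initial])]     # ordered by path cost, lowest first
    best = {initial: 0}
    while fringe:
        cost, _, path = heapq.heappop(fringe)
        state = path[-1]
        if is_goal(state):
            return cost, path
        for step_cost, nxt in successors(state):
            new_cost = cost + step_cost
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost         # cheapest known path to nxt so far
                heapq.heappush(fringe, (new_cost, next(tie), path + [nxt]))
    return None
```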
Depth-first search definition?
Expand the deepest unexpanded node; the fringe is a LIFO queue (a stack).
What are properties of DFS?
Properties strongly depend on whether graph-search or tree-search is applied:
Graph-search: avoids repeated states and redundant paths; complete in finite state spaces, but may store every node
Tree-search: not complete even in finite state spaces (loops)
Modified tree-search: checks each new state against those on the path from the current node to the root; complete in finite state spaces
Depth-limited search definition?
Depth-first search with a depth limit ℓ, i.e. nodes at depth ℓ have no successors.
Iterative-deepening search definition?
Increases the depth limit to ensure completeness while still being memory-efficient.
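A sketch of the idea (our own illustration; `successors(state)` here is a plain successor generator): rerun depth-limited DFS with limits 0, 1, 2, ... until a solution appears.

```python
def depth_limited(state, is_goal, successors, limit, path=()):
    """Recursive depth-first search that treats nodes at depth `limit` as leaves."""
    path = path + (state,)
    if is_goal(state):
        return list(path)
    if limit == 0:
        return None
    for nxt in successors(state):
        if nxt not in path:                  # avoid loops along the current path
            found = depth_limited(nxt, is_goal, successors, limit - 1, path)
            if found:
                return found
    return None

def iterative_deepening(initial, is_goal, successors, max_depth=50):
    for limit in range(max_depth + 1):       # memory stays linear per iteration
        found = depth_limited(initial, is_goal, successors, limit)
        if found:
            return found
    return None
```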
Summary
1. Graph-search can be exponentially more efficient than tree-search.
2. Iterative-deepening search uses only linear space and not much more time than other uninformed algorithms.
3. Problem formulation usually requires abstracting away real-world details to define a state space that can be explored.
What is the difference between graph search and tree search?
The only difference is that graph search stores an explored set and checks newly generated nodes against it, so that the same state is never expanded twice; tree search omits this bookkeeping, which is safe only when the search can never revisit the same state (e.g. when the state space is a tree).
Types of Informed search
Best-first search: Greedy search, Beam search, A* search
Local search algorithms: Hill-climbing, Simulated Annealing, Local Beam Search
Greedy search
The evaluation function h(n) (the heuristic) is an estimate of the cost from n to the closest goal. Greedy search expands the node that appears to be closest to a goal.
Properties of Greedy search
Complete? Yes, in finite spaces with repeated-state checking.
Time complexity: O(b^m), but a good heuristic can give dramatic improvement.
Space complexity: O(b^m); keeps all nodes in memory.
Optimal? No.
(b is the maximum branching factor of the tree; m is the maximum depth of the state space.)
Beam search
Problem: best-first greedy search might get stuck.
Solution: keep track of the k best alternative solutions.
Algorithm summary: execute greedy best-first search with the following extension:
1. Sort the fringe and keep only the best k nodes.
2. Remove all nodes from the fringe and expand the best k nodes.
3. If the goal is found, quit; otherwise
4. add the resulting nodes to the fringe (depending on the search variant used: tree, graph, etc.) and continue from step 1.
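The steps above translate to a short sketch (our own; `h` is the heuristic, `k` the beam width, and `max_iters` a safety bound we added):

```python
def beam_search(initial, is_goal, successors, h, k=3, max_iters=10_000):
    """Greedy best-first search that keeps only the k most promising nodes."""
    fringe = [[initial]]
    for _ in range(max_iters):
        if not fringe:
            return None
        fringe.sort(key=lambda path: h(path[-1]))   # 1. sort by heuristic value
        beam, fringe = fringe[:k], []               # 2. keep/expand only the best k
        for path in beam:
            if is_goal(path[-1]):                   # 3. quit if a goal is found
                return path
            for nxt in successors(path[-1]):
                fringe.append(path + [nxt])         # 4. add successors and repeat
    return None
```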
A* search
Idea: avoid expanding paths that are already expensive.
Evaluation function: f(n) = g(n) + h(n)
g(n) = cost so far to reach n
h(n) = estimated cost from n to the goal
f(n) = estimated total cost of the path through n to the goal
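Putting g and h together gives a compact A* graph search; a sketch under our own conventions (`successors(state)` yields `(step_cost, next_state)` pairs, `h` is the heuristic):

```python
import heapq, itertools

def a_star(initial, is_goal, successors, h):
    """Expand the fringe node with the lowest f(n) = g(n) + h(n)."""
    tie = itertools.count()                        # tie-breaker for the heap
    fringe = [(h(initial), next(tie), 0, [initial])]
    best_g = {initial: 0}
    while fringe:
        f, _, g, path = heapq.heappop(fringe)
        state = path[-1]
        if is_goal(state):
            return g, path
        for step_cost, nxt in successors(state):
            g2 = g + step_cost                     # cost so far to reach nxt
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(fringe, (g2 + h(nxt), next(tie), g2, path + [nxt]))
    return None
```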
Conditions for optimality I
An admissible heuristic never overestimates the cost to reach the goal, i.e. it is optimistic: for every node n, h(n) ≤ h*(n), where h*(n) is the true cost from n. We also require h(n) ≥ 0, so that h(G) = 0 for any goal G.
Conditions for optimality II
A consistent heuristic satisfies the triangle inequality: h(n) ≤ c(n, a, n′) + h(n′) for every node n and every successor n′ of n generated by an action a.
Optimality of A*
Theorem: the tree-search version of A* is optimal if h(n) is admissible; the graph-search version of A* is optimal if h(n) is consistent.
Proving optimality of A* employing graph-search I
Step 1: if h(n) is consistent, then the values of f(n) along any path are nondecreasing.
Consistency means h(n) ≤ c(n, a, n′) + h(n′). If n′ is a successor of n via action a, then
f(n′) = g(n′) + h(n′) = g(n) + c(n, a, n′) + h(n′) ≥ g(n) + h(n) = f(n)
Proving optimality of A* employing graph-search II
Step 2: whenever A* selects a node n for expansion, the optimal path to that node has already been found.
Suppose not: then some node n′ on the optimal path to n would still be on the fringe. It follows that f(n) ≤ f(n′), because n was selected for expansion. Because f-costs are nondecreasing along any path, the path through n′ cannot cost less than f(n). Contradiction.
Proving optimality for tree-search
Suppose some suboptimal goal G2 has been generated and is in the queue. Let n be an unexpanded node on a shortest path to an optimal goal G. Then:
f(G2) = g(G2) + h(G2) = g(G2), since h(G2) = 0
f(G2) > g(G) = g(n) + h*(n), since G2 is suboptimal
f(G2) > g(n) + h(n) = f(n), since h(n) ≤ h*(n) (admissibility)
Since f(G2) > f(n), A* will never select G2 for expansion.
Which nodes are expanded?
If C* is the cost of the optimal solution path, then:
A* expands all nodes with f(n) < C*
A* might expand some nodes with f(n) = C*
A* expands no nodes with f(n) > C*
Properties of A*
Complete? Yes, unless there are infinitely many nodes with f ≤ f(G).
Time: exponential, O(b^(εd)), where ε := (h* − h)/h* is the relative error and h* is the cost of an optimal solution.
Space: exponential; graph-search keeps all nodes in memory, and even the size of the fringe can be exponential.
Optimal? Yes, depending on the properties of the heuristic and the search method (see above).
A* usually runs out of space long before it runs out of time.
Heuristics: Euclidean distance
Euclidean distance: straight-line distance. On a plane, dS(A, B) = √((xA − xB)² + (yA − yB)²)
Heuristics: Manhattan distance
Manhattan distance: the number of valid moves required to place a tile in its correct position. On a plane, dM(A, B) = |xA − xB| + |yA − yB|
Admissible heuristics
Example: the 8-puzzle
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance (i.e. the number of squares each tile is from its desired location)
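Both heuristics are short functions. A sketch (our own; a state is a tuple of nine entries in row-major order, with 0 standing for the blank):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # goal configuration, 0 = blank

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for tile, goal in zip(state, GOAL) if tile != 0 and tile != goal)

def h2(state):
    """Total Manhattan distance of every tile from its goal square."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = GOAL.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

s = (7, 2, 4, 5, 0, 6, 8, 3, 1)
print(h1(s), h2(s))   # h2(n) >= h1(n) on every state: h2 dominates h1 (see below)
```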
Dominance
If h2(n) ≥ h1(n) for all n (both admissible) then h2 dominates h1 and is better for search
Relaxed problems
Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem:
If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution.
If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution.
Key points: the optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem, and the relaxed problem can be solved efficiently.
Memory-bounded Heuristic Search
Task: solve A*'s space problem while maintaining completeness and optimality.
Iterative-deepening A* (IDA*): the cutoff information is the f-cost (g + h) instead of the depth; the new cutoff is the smallest f-cost of any node that exceeded the cutoff on the previous iteration.
Recursive best-first search (RBFS): a recursive algorithm that attempts to mimic standard best-first search with linear space.
(Simple) Memory-bounded A* ((S)MA*): drop the worst leaf node when memory is full; back up if necessary.
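IDA* is a small change to iterative deepening: the cutoff is on f = g + h rather than on depth, and each round raises the cutoff to the smallest f-cost that exceeded it. A sketch (our own, same `successors`/`h` conventions as the A* example):

```python
def ida_star(initial, is_goal, successors, h):
    """Iterative-deepening A*: deepen an f-cost cutoff instead of a depth limit."""
    def search(path, g, bound):
        state = path[-1]
        f = g + h(state)
        if f > bound:
            return f, None                 # report the f-cost that exceeded the bound
        if is_goal(state):
            return f, list(path)
        minimum = float("inf")
        for step_cost, nxt in successors(state):
            if nxt not in path:            # avoid loops along the current path
                t, found = search(path + (nxt,), g + step_cost, bound)
                if found:
                    return t, found
                minimum = min(minimum, t)
        return minimum, None               # smallest f-cost beyond the cutoff

    bound = h(initial)
    while True:
        bound, found = search((initial,), 0, bound)
        if found:
            return found
        if bound == float("inf"):
            return None                    # no solution exists
```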
RBFS Evaluation
RBFS is somewhat more efficient than IDA*.
Like A*, it is optimal if h(n) is admissible.
Space complexity: O(bd).
Time complexity: difficult to characterize; worst case O(b^2d). It depends on the accuracy of h(n) and on how often the best path changes.
IDA* and RBFS suffer from using too little memory: IDA* retains only a single number between iterations (the current f-cost limit).
(Simplified) Memory-bounded A*
Uses all available memory.
Expands the best leaves until the available memory is full.
When full, SMA* drops the worst leaf node (the one with the highest f-value) and, like RBFS, backs up the f-value of the forgotten node to its parent.
The same node could be selected for expansion and deletion; SMA* solves this by expanding the newest best leaf and deleting the oldest worst leaf.
SMA* is complete if the solution is reachable, i.e. the solution path must fit in memory.
Optimal if the heuristic h(n) is admissible.
Iterative improvement algorithms
In many optimization problems, path is irrelevant; the goal state itself is the solution
Hill-climbing Variations
Random-restart hill-climbing: tries to avoid getting stuck in local maxima.
Stochastic hill-climbing: random selection among the uphill moves; the selection probability can vary with the steepness of the uphill move.
First-choice hill-climbing: stochastic hill-climbing that generates successors randomly until one better than the current state is found; a good strategy if there are many successors.
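A minimal random-restart sketch (our own; `random_state`, `neighbors`, and `value` are assumed problem callbacks, with `value` to be maximized):

```python
def hill_climb(state, neighbors, value):
    """Greedy ascent: move to the best neighbor until no neighbor improves."""
    while True:
        best = max(neighbors(state), key=value, default=None)
        if best is None or value(best) <= value(state):
            return state                       # local maximum (or plateau edge)
        state = best

def random_restart_hill_climb(random_state, neighbors, value, restarts=25):
    """Restart from random states and keep the best local maximum found."""
    return max((hill_climb(random_state(), neighbors, value)
                for _ in range(restarts)), key=value)
```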
Simulated Annealing
Escape local maxima by allowing "bad" moves, but gradually decrease their size and frequency. Origin: an adaptation of the Metropolis-Hastings algorithm.
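A sketch of the loop with a geometric cooling schedule (our own illustration; `neighbor` proposes a random move and `value` is to be maximized):

```python
import math, random

def simulated_annealing(state, neighbor, value, t0=1.0, cooling=0.995, t_min=1e-4):
    """Accept every improving move; accept a worsening move with
    probability exp(delta / T), where the temperature T decreases over time."""
    t = t0
    while t > t_min:
        candidate = neighbor(state)
        delta = value(candidate) - value(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = candidate                  # "bad" moves become rarer as T falls
        t *= cooling                           # geometric cooling schedule
    return state
```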
Local Beam Search
Keep track of k states instead of one.
Initially: k random states.
Next: determine all successors of the k states.
If any successor is a goal, we are finished; otherwise select the k best successors and repeat.
Major difference from k random-restart searches: information is shared among the k search threads.
Can suffer from lack of diversity.
Stochastic variant: choose k successors at random, with the probability of choosing a given successor being an increasing function of its value.
Genetic Algorithms
A variant of local beam search in which a successor state is generated by combining two parent states.
Start population: k randomly generated states.
A state is represented by a chromosome: a string over a finite alphabet (often a string of 0s and 1s), e.g. the positions of the queens in 8-queens.
Evaluation function (fitness function): higher values for better states, e.g. the number of non-attacking pairs of queens.
Produce the next generation of states by crossover (recombination) and mutation.
Crossover and mutation
Crossover: select a pair of chromosomes, then generate children according to a crossover strategy: cut point(s) are defined and the chromosome segments are swapped.
Mutation: similar in spirit to a random jump in simulated annealing. Use randomness to maintain genetic diversity: change a randomly selected bit of a chromosome to a random value to avoid local extremes.
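For bit-string chromosomes, crossover and mutation are a few lines each; a sketch of one generation (our own; `fitness` is an assumed callback returning non-negative values, at least one of them positive):

```python
import random

def crossover(parent_a, parent_b):
    """Single cut point: swap the tails of the two parent chromosomes."""
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]

def mutate(chromosome, rate=0.01):
    """Flip each bit with a small probability to maintain diversity."""
    return [bit ^ 1 if random.random() < rate else bit for bit in chromosome]

def next_generation(population, fitness):
    """Fitness-proportionate selection, then crossover and mutation."""
    weights = [fitness(c) for c in population]
    children = []
    while len(children) < len(population):
        a, b = random.choices(population, weights=weights, k=2)
        for child in crossover(a, b):
            children.append(mutate(child))
    return children[:len(population)]
```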
Online Search
Until now all algorithms were offline.
Offline: the solution is determined before executing it.
Online: interleave computation and action (observe, compute, take action, observe, ...).
Online search is necessary for dynamic or stochastic environments, where it is impossible to take all possible contingencies into account.
Used for exploration problems: unknown states and/or actions, e.g. any robot in a new environment.
Summary
Heuristic functions estimate the costs of shortest paths.
Good heuristics can dramatically reduce search cost.
Greedy best-first search expands the node with the lowest h: incomplete and not always optimal.
A* search expands the node with the lowest g + h: complete and optimal.
Admissible heuristics can be derived from exact solutions of relaxed problems.
Local search algorithms can be used to find a reasonably good local optimum after a small number of restarts.
Constraint programming is usually done in two steps:
1) Creation of a conceptual model, an abstraction of some real-world problem
2) Design of a program that solves the problem
MiniZinc
MiniZinc is a modeling language developed mostly by NICTA (Australia). MiniZinc ⊂ Zinc, a more powerful modeling language.
Running MiniZinc
MiniZinc model files must have the .mzn extension; data files use .dzn.
A MiniZinc model consists of a sequence of items
An inclusion item: include <filename (string)>;
An output item: output <list of string expressions>;
A declaration of a variable
A constraint: constraint <Boolean expression>;
A solve item (exactly one of the following is allowed):
solve satisfy;
solve maximize <arith. expression>;
solve minimize <arith. expression>;
Predicates and asserts (for checking parameter values)
Search annotation items (ann)
Writing Efficient Models I
Add search annotations to the solve item to control exploration of the search space:
first_fail (MRV): choose the variable with the smallest domain size
indomain_min: assign the variable its smallest domain value
See the reference manual for a complete list of heuristics.
Writing Efficient Models II
Use global constraints such as alldifferent, since they have better propagation behavior.
Try different models for the problem.
Add redundant constraints.
Bound variables as tightly as possible (avoid var int).
Avoid introducing unnecessary variables.
Summary
MiniZinc is a language designed for the specification of decision and optimisation problems.
The model is declarative, although it can contain annotations.
The language provides a large number of operators and built-in functions to simplify modelling.
Advanced models in MiniZinc use predicates to define complex subproblem constraints: global constraints (better solving) and user-defined constraints (better readability).
Efficiency depends on the model formulation; developing an efficient decision model sometimes requires considerable experimentation (NP-hard problems are hard problems).