Lecture 2: Uninformed Search

Questions and Answers List

Q: What are agents?
A: Agents include, for example, humans, robots, softbots, etc.
Q: What are agent functions?
A: An agent function maps percept histories to actions, f: P* -> A. The agent program runs on the physical architecture to produce f.
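A minimal sketch of the mapping f: P* -> A, using a hypothetical vacuum-world agent (the percepts and rules are illustrative, not from the lecture):

```python
# Illustrative agent function: maps a percept history to an action.
# The percepts ("A"/"B" location, "dirty"/"clean" status) are hypothetical.

def agent_function(percept_history):
    """f: P* -> A, selecting an action from the percept sequence."""
    location, status = percept_history[-1]  # react to the latest percept
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

history = [("A", "dirty")]
print(agent_function(history))  # -> suck
```

The table of rules here is trivially simple; the point is only the type of the function, from percept histories to actions.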
Q: What is a Task Environment (TE)?
A: A TE is specified by:
- P (Performance measure)
- E (Environment)
- A (Actuators)
- S (Sensors)
Example TE description for an automated taxi:
- P: safe, fast, legal, comfortable trip, maximize profits, etc.
- E: roads, other traffic, pedestrians, customers, physics, etc.
- A: steering, accelerator, brake, horn, lights, etc.
- S: cameras, sonar, speedometer, GPS, etc.
Q: What is a rational agent?
A: A rational agent perceives E through S and acts upon E through A so that each selected action is the one expected to maximize P.
Q: Fully observable vs. partially observable environments?
A: An environment is fully observable when the sensors can detect all aspects that are relevant to the choice of action; otherwise it is partially observable.
Q: Deterministic vs. stochastic (non-deterministic) environments?
A: If the next environment state is completely determined by the current state and the executed action, the environment is deterministic (uncertainty introduced by other agents is ignored). Otherwise it is stochastic: the result of an action is characterized by a set of possible outcomes (non-deterministic actions, e.g. tossing a coin).
Q: Episodic (vs. sequential) environments?
A: In an episodic environment the agent's experience can be divided into atomic episodes; the next episode does not depend on the actions taken in previous episodes.
Q: Static (vs. dynamic) environments?
A: If the environment can change while the agent is choosing an action, the environment is dynamic; otherwise it is static.
Q: Discrete (vs. continuous) environments?
A: This distinction can be applied to the state of the environment, to the way time is handled, and to the percepts and actions of the agent.
Q: Single-agent (vs. multi-agent) environments?
A: In a single-agent environment there are no other agents maximizing a performance measure that depends on the current agent's actions; otherwise the environment is multi-agent.
Q: What is a known environment?
A: The environment is known if it is clear to the agent, in any state, which state (or states, in case of non-determinism) results from a chosen action.
Q: What is search (for solving problems)?
A: The process of looking for a sequence of actions that reaches a goal.
Q: What is the design of the agent?
A: Formulate, search, execute.
Q: What is the total number of reachable states for the fifteen puzzle?
A: (n^2)!/2; for the fifteen puzzle (n = 4) this is 16!/2, about 1.05 · 10^13.
Q: What is the total number of reachable states for a 3x3 Rubik's cube?
A: (8! · 3^8 · 12! · 2^12)/12, about 4.3 · 10^19.
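Both counts can be checked directly; a quick sketch in Python:

```python
from math import factorial

# Fifteen puzzle (n = 4): half of all tile permutations are reachable.
n = 4
fifteen_puzzle = factorial(n * n) // 2
print(fifteen_puzzle)  # 10461394944000, about 1.05e13

# 3x3 Rubik's cube: corner and edge permutations and orientations,
# divided by 12 because only one in twelve assemblies is reachable.
rubik = (factorial(8) * 3**8 * factorial(12) * 2**12) // 12
print(rubik)  # 43252003274489856000, about 4.3e19
```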
Q: What is the basic idea of tree search algorithms?
A: Given a (weighted) graph representing an abstraction of the real-world problem and a search strategy:
1. Start from the initial node.
2. Simulate the exploration of the graph by iterating the following steps:
   - Select a node for expansion according to the strategy.
   - Return the solution path if the selected node is the goal.
   - Generate the successors of the selected node (a.k.a. expand the state).
3. Return failure if all nodes were checked and no solution was found.
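The loop above can be sketched generically; the strategy is just the order in which the fringe hands out nodes. A minimal sketch, assuming the problem is given as a successor function and a goal test (the example graph is illustrative):

```python
from collections import deque

def tree_search(initial, successors, is_goal, fifo=True):
    """Generic tree search. fifo=True expands in FIFO order
    (breadth-first); fifo=False expands in LIFO order (depth-first)."""
    fringe = deque([(initial, [initial])])   # nodes are (state, path) pairs
    while fringe:
        # select a node for expansion according to the strategy
        state, path = fringe.popleft() if fifo else fringe.pop()
        if is_goal(state):
            return path                      # solution path found
        for succ in successors(state):       # expand the selected state
            fringe.append((succ, path + [succ]))
    return None                              # all nodes checked: failure

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(tree_search("A", lambda s: graph.get(s, []), lambda s: s == "D"))
# -> ['A', 'B', 'D']
```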
Q: What are the dangers of not detecting repeated states?
A: Failure to detect repeated states can turn a linear problem into an exponential one!
Q: Can a node be generated once it has been expanded?
A: No. A node is generated only if it has not been expanded yet.
Q: How are search strategies evaluated?
A: Strategies are evaluated along four dimensions:
- Completeness: does it always find a solution if one exists?
- Time complexity: number of nodes generated/expanded.
- Space complexity: maximum number of nodes in memory.
- Optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of:
- b: maximum branching factor of the search tree
- d: depth of the least-cost solution
- m: maximum depth of the state space (can be ∞)
Q: What are uninformed strategies?
A: Uninformed strategies use only the information available in the problem definition.
Q: What are some uninformed search strategies?
A: Breadth-first search, uniform-cost search, depth-first search, depth-limited search, iterative-deepening search.
Q: Breadth-first search definition?
A: Expand the shallowest unexpanded node; the fringe is a first-in, first-out (FIFO) queue.
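A minimal breadth-first graph search, assuming the problem is given as an adjacency dict (the example graph is illustrative):

```python
from collections import deque

def breadth_first_search(start, goal, neighbors):
    """Expand the shallowest unexpanded node: the fringe is a FIFO queue.
    An explored set avoids re-generating repeated states (graph search)."""
    fringe = deque([[start]])            # queue of paths
    explored = {start}
    while fringe:
        path = fringe.popleft()          # shallowest node first
        node = path[-1]
        if node == goal:
            return path
        for nxt in neighbors.get(node, []):
            if nxt not in explored:
                explored.add(nxt)
                fringe.append(path + [nxt])
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
print(breadth_first_search("S", "G", graph))  # -> ['S', 'A', 'G']
```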
Q: What is a fringe?
A: The "fringe" is a data structure that stores all the nodes or states that have been discovered but not yet explored.
Q: What is the problem with BFS?
A: Space. BFS keeps every generated node in memory, so memory grows exponentially with the depth of the solution, O(b^d).
Q: Uniform-cost search definition?
A: Expand the least-cost unexpanded node; the fringe is a queue ordered by path cost, lowest first.
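A sketch of uniform-cost search using a priority queue ordered by path cost (the weighted example graph is illustrative):

```python
import heapq

def uniform_cost_search(start, goal, edges):
    """Expand the least-cost unexpanded node: the fringe is a priority
    queue ordered by path cost g(n), lowest first."""
    fringe = [(0, start, [start])]       # (path cost, state, path)
    explored = set()
    while fringe:
        cost, state, path = heapq.heappop(fringe)
        if state == goal:
            return cost, path
        if state in explored:
            continue
        explored.add(state)
        for nxt, step in edges.get(state, []):
            heapq.heappush(fringe, (cost + step, nxt, path + [nxt]))
    return None

edges = {"S": [("A", 1), ("B", 5)], "A": [("G", 10)], "B": [("G", 1)]}
print(uniform_cost_search("S", "G", edges))  # -> (6, ['S', 'B', 'G'])
```

Note that the cheap first step through A is not the cheapest path overall; ordering the fringe by total path cost is what makes the search return the least-cost solution.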
Q: Are breadth-first search and uniform-cost search equal?
A: They are equivalent if all step costs are equal.
Q: Depth-first search definition?
A: Expand the deepest unexpanded node; the fringe is a LIFO queue (stack).
Q: What are the properties of DFS?
A: They depend strongly on whether graph search or tree search is applied:
- Graph search: avoids repeated states and redundant paths, complete in finite state spaces, but may store every node.
- Tree search: not complete even in finite state spaces (loops).
- Modified tree search: checks each new state against those on the path from the current node to the root; complete in finite state spaces.
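The modified tree-search variant can be sketched as a recursive DFS that checks each candidate against the current path (the example graph, which contains loops, is illustrative):

```python
def depth_first_search(state, goal, neighbors, path=None):
    """Expand the deepest unexpanded node. Checking candidates against the
    current path avoids loops, making the search complete in finite
    state spaces."""
    path = path or [state]
    if state == goal:
        return path
    for nxt in neighbors.get(state, []):
        if nxt not in path:              # skip states already on this path
            result = depth_first_search(nxt, goal, neighbors, path + [nxt])
            if result:
                return result
    return None

# Graph with loops (A -> A, B -> A): plain tree search would cycle forever.
graph = {"A": ["B", "A"], "B": ["C", "A"], "C": []}
print(depth_first_search("A", "C", graph))  # -> ['A', 'B', 'C']
```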
Q: Depth-limited search definition?
A: Depth-first search with a depth limit ℓ, i.e. nodes at depth ℓ have no successors.
Q: Iterative-deepening search definition?
A: Runs depth-limited search with increasing limits ℓ = 0, 1, 2, ... until a solution is found; this ensures completeness while remaining memory-efficient.
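A sketch of iterative deepening built on depth-limited search (the example graph and the max_depth cap are illustrative):

```python
def depth_limited(state, goal, neighbors, limit):
    """Depth-first search with a depth limit: nodes at the limit
    are treated as having no successors."""
    if state == goal:
        return [state]
    if limit == 0:
        return None
    for nxt in neighbors.get(state, []):
        rest = depth_limited(nxt, goal, neighbors, limit - 1)
        if rest:
            return [state] + rest
    return None

def iterative_deepening(start, goal, neighbors, max_depth=50):
    """Run depth-limited search with limits 0, 1, 2, ... so the search
    is complete while only ever storing one path (linear space)."""
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, neighbors, limit)
        if result:
            return result
    return None

graph = {"A": ["B"], "B": ["C"], "C": []}
print(iterative_deepening("A", "C", graph))  # -> ['A', 'B', 'C']
```

Shallow levels are re-expanded on every iteration, but because the tree grows exponentially with depth, the repeated work adds only a constant factor.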
Q: Summary?
A:
1. Graph search can be exponentially more efficient than tree search.
2. Iterative-deepening search uses only linear space and not much more time than other uninformed algorithms.
3. Problem formulation usually requires abstracting away real-world details to define a state space that can be explored.
Q: What is the difference between graph search and tree search?
A: The only difference is that tree search does not need to store the explored set, because we are guaranteed never to attempt to visit the same state twice.