Design of Agents








Slide 1: AI as the Design of Agents
Tuomas Sandholm
Carnegie Mellon University, Computer Science Department
= a unifying view for the bag of techniques that AI encompasses

Slide 2: An agent and its environment
Diagram: the agent receives percepts from the environment via sensors and acts on the environment via effectors; a "?" marks the agent program that maps one to the other.

Slide 3: How to design an intelligent agent?
Definition: An agent perceives its environment via sensors and acts in that environment with its effectors.
Hence, an agent gets percepts one at a time, and maps this percept sequence to actions (one action at a time).
Properties:
  • Autonomous
  • Interacts with other agents plus the environment
  • Reactive to the environment
  • Pro-active (goal-directed)

Slide 4: Examples of agents in different types of applications

Agent type | Percepts | Actions | Goals | Environment
Medical diagnosis system | Symptoms, findings, patient's answers | Questions, tests, treatments | Healthy patients, minimize costs | Patient, hospital
Satellite image analysis system | Pixels of varying intensity, color | Print a categorization of scene | Correct categorization | Images from orbiting satellite
Part-picking robot | Pixels of varying intensity | Pick up parts and sort into bins | Place parts in correct bins | Conveyor belts with parts
Refinery controller | Temperature, pressure readings | Open, close valves; adjust temperature | Maximize purity, yield, safety | Refinery
Interactive English tutor | Typed words | Print exercises, suggestions, corrections | Maximize student's score on test | Set of students

Slide 5: Examples of agents …

Agent type | Percepts | Actions | Goals | Environment
Bin-picking robot | Images | Grasp objects; sort into bins | Parts in correct bins | Conveyor belt
Medical diagnosis | Patient symptoms | Tests and treatments | Healthy patients | Patient & hospital
Softbot | Web pages | ftp, mail, telnet | Collect info on a subject | Internet

Slide 6: Definition of ideal rational agent
Ideal rational agent: For each possible percept sequence, such an agent does whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
What do you think? Is this an acceptable definition?
  • Not looking left when crossing the street: if I don't see a car coming from the left, is it rational to cross the street? No. The agent should also consider taking information-gathering actions.
  • Bounded rationality:
    - Limited/costly computation time
    - Limited/costly memory
    - …

Slide 7: Agent's strategy
  • An agent's strategy is a mapping from percept sequences to actions.
  • How to encode an agent's strategy?
  • A long list of what should be done for each possible percept sequence vs. a shorter specification (e.g. an algorithm)

Slide 8: WARNING: Might not get what you ask for in the performance measure
  • Cleaning robot: pick up as much trash as possible
  • Vehicle route optimization: maximize utilization => driving fully loaded; capitalizing on oddities in the tariff list => renegotiation
  • Don't include the solution method in the criterion.

Slide 9: agent = architecture + program
  • Physical agents vs. software agents (software agents = softbots)
  • This course concentrates on the program.

Slide 10: Skeleton agent

function SKELETON-AGENT(percept) returns action
  static: memory, the agent's memory of the world
  memory ← UPDATE-MEMORY(memory, percept)
  action ← CHOOSE-BEST-ACTION(memory)
  memory ← UPDATE-MEMORY(memory, action)
  return action

On each invocation, the agent's memory is updated to reflect the new percept, the best action is chosen, and the fact that the action was taken is also stored in memory. The memory persists from one invocation to the next.
  • Input = the percept, not the whole history.
  • NOTE: The performance measure is not part of the agent.
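A minimal Python rendering of the skeleton-agent pseudocode, as a sketch: the designer supplies `UPDATE-MEMORY` and `CHOOSE-BEST-ACTION` as callables (the names and the toy usage below are illustrative, not from the slides).

```python
# Sketch of SKELETON-AGENT: memory persists across invocations and is
# updated with both the incoming percept and the chosen action.

class SkeletonAgent:
    def __init__(self, update_memory, choose_best_action):
        self.memory = None                      # static: persists between calls
        self.update_memory = update_memory
        self.choose_best_action = choose_best_action

    def __call__(self, percept):
        self.memory = self.update_memory(self.memory, percept)
        action = self.choose_best_action(self.memory)
        self.memory = self.update_memory(self.memory, action)
        return action

# Usage: memory is simply the list of everything perceived and done.
agent = SkeletonAgent(
    update_memory=lambda memory, item: (memory or []) + [item],
    choose_best_action=lambda memory: f"react-to-{memory[-1]}")
print(agent("loud-noise"))   # react-to-loud-noise
```

Note that, as the slide says, the performance measure appears nowhere in the agent itself; it only shows up when an outside observer evaluates the agent.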

Slide 11: Examples of how the agent function can be implemented (in increasing order of sophistication):
  1. Table-driven agent
  2. Simple reflex agent
  3. Reflex agent with internal state
  4. Agent with explicit goals
  5. Utility-based agent

Slide 12: 1. Table-driven agent

function TABLE-DRIVEN-AGENT(percept) returns action
  static: percepts, a sequence, initially empty
          table, a table indexed by percept sequences, initially fully specified
  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action

An agent based on a prespecified lookup table. It keeps track of the percept sequence and just looks up the best action.
Problems:
  • Huge number of possible percepts (consider an automated taxi with a camera as the sensor) => the lookup table would be huge
  • Takes a long time to build the table
  • Not adaptive to changes in the environment; requires the entire table to be updated if changes occur
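The table-driven pseudocode can be sketched in Python as follows; the table maps entire percept sequences (as tuples) to actions. The two-location vacuum world used in the example is a standard illustration, not part of the slides.

```python
# Sketch of TABLE-DRIVEN-AGENT: a closure holds the growing percept
# sequence; each call appends the new percept and looks the sequence up.

def make_table_driven_agent(table):
    percepts = []                               # static: initially empty
    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))       # LOOKUP(percepts, table)
    return agent

# Usage: a toy vacuum world with locations A and B.
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "right",
    (("A", "clean"), ("B", "dirty")): "suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "clean")))   # right
print(agent(("B", "dirty")))   # suck
```

Even this three-entry table illustrates the scaling problem: the table is indexed by the whole history, so its size grows exponentially with the length of the percept sequence.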

Slide 13: 2. Simple reflex agent
  • Differs from the lookup-table-based agent in that the condition (which determines the action) is already a higher-level interpretation of the percepts.
  • Percepts could be, e.g., the pixels on the camera of the automated taxi.

Slide 14: Simple reflex agent
Diagram: environment → sensors → "What the world is like now" → condition-action rules → "What action I should do now" → effectors → environment.

function SIMPLE-REFLEX-AGENT(percept) returns action
  static: rules, a set of condition-action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action

A simple reflex agent works by finding a rule whose condition matches the current situation (as defined by the percept) and then doing the action associated with that rule.
  • First match; no further matches are sought.
  • Only one level of deduction.
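A minimal sketch of the simple reflex agent, assuming rules are (condition-predicate, action) pairs and, as on the slide, the first matching rule wins. The braking rule below echoes the taxi example from the next slide; the dictionary-based state is an assumption for illustration.

```python
# Sketch of SIMPLE-REFLEX-AGENT: interpret the percept, scan the rules
# in order, act on the first rule whose condition matches.

def interpret_input(percept):
    # Illustrative: the "higher-level interpretation" is trivial here.
    return percept

def simple_reflex_agent(percept, rules):
    state = interpret_input(percept)
    for condition, action in rules:     # first match, no further matches
        if condition(state):
            return action
    return None

# Usage: braking behavior for an automated taxi.
rules = [
    (lambda s: s.get("car_in_front_is_braking"), "initiate-braking"),
    (lambda s: True, "keep-driving"),   # catch-all default rule
]
print(simple_reflex_agent({"car_in_front_is_braking": True}, rules))   # initiate-braking
print(simple_reflex_agent({"car_in_front_is_braking": False}, rules))  # keep-driving
```

Because the rule list is scanned top to bottom, rule order encodes priority: the catch-all default must come last.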

Slide 15: Simple reflex agent …
  • Table lookup of condition-action pairs defining all possible condition-action rules necessary to interact in an environment, e.g. if car-in-front-is-braking then initiate braking.
  • Problems:
    - The table is still too big to generate and to store (e.g. taxi)
    - Takes a long time to build the table
    - No knowledge of non-perceptual parts of the current state
    - Not adaptive to changes in the environment; requires the entire table to be updated if changes occur
    - Looping: can't make actions conditional on the past

Slide 16: 3. Reflex agent with internal state
Diagram: environment → sensors → state ("What the world is like now", maintained using "How the world evolves" and "What my actions do") → condition-action rules → "What action I should do now" → effectors → environment.

Slide 17: Reflex agent with internal state …

function REFLEX-AGENT-WITH-STATE(percept) returns action
  static: state, a description of the current world state
          rules, a set of condition-action rules
  state ← UPDATE-STATE(state, percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  state ← UPDATE-STATE(state, action)
  return action

A reflex agent with internal state works by finding a rule whose condition matches the current situation (as defined by the percept and the stored internal state) and then doing the action associated with that rule.
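A sketch of the stateful reflex agent in Python: the world-state dictionary persists across invocations and is updated from both the percept and the agent's own action. The vacuum-style usage and all names are assumptions for illustration.

```python
# Sketch of REFLEX-AGENT-WITH-STATE: state survives between calls and
# is threaded through UPDATE-STATE twice per invocation, exactly as in
# the pseudocode above.

class ReflexAgentWithState:
    def __init__(self, rules, update_state):
        self.state = {}            # static: description of the current world state
        self.rules = rules         # (condition, action) pairs, first match wins
        self.update_state = update_state

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept=percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.state = self.update_state(self.state, action=action)
                return action
        return None

# Usage: remember which locations have already been cleaned.
def update_state(state, percept=None, action=None):
    state = dict(state)
    if percept is not None:
        state["loc"], state["status"] = percept
    if action == "suck":
        state.setdefault("cleaned", set()).add(state["loc"])
    return state

agent = ReflexAgentWithState(
    rules=[(lambda s: s["status"] == "dirty", "suck"),
           (lambda s: True, "move")],
    update_state=update_state)
print(agent(("A", "dirty")))   # suck
print(agent(("A", "clean")))   # move
```

The `cleaned` set is exactly the kind of non-perceptual knowledge a simple reflex agent cannot hold: it records past percepts rather than anything visible in the current one.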

Slide 18: Reflex agent with internal state …
  • Encodes the internal state of the world to remember the past as contained in earlier percepts.
  • Needed because sensors do not usually give the entire state of the world at each input, so perception of the environment is captured over time. "State" is used to encode different "world states" that generate the same immediate percept.
  • Requires the ability to represent change in the world with/without the agent. One possibility is to represent just the latest state, but then the agent cannot reason about hypothetical courses of action.
  • Example: Rodney Brooks's Subsumption Architecture. Main idea: build complex intelligent robots by decomposing behaviors into a hierarchy of skills, each completely defining a complete percept-action cycle for one very specific task, for example avoiding contact, wandering, exploring, recognizing doorways, etc. Each behavior is modeled by a finite-state machine with a few states (though each state may correspond to a complex function or module). Behaviors are loosely coupled, asynchronous interactions.

Slide 19: 4. Agent with explicit goals
Diagram: environment → sensors → state ("What the world is like now", maintained using "How the world evolves" and "What my actions do", plus "What it will be like if I do action A") → goals → "What action I should do now" → effectors → environment.

Slide 20: Agent with explicit goals …
  • Choose actions so as to achieve a (given or computed) goal = a description of desirable situations, e.g. where the taxi wants to go.
  • Keeping track of the current state is often not enough; goals are needed to decide which situations are good.
  • Deliberative instead of reactive.
  • May have to consider long sequences of possible actions before deciding whether the goal is achieved; involves consideration of the future, "what will happen if I do …?" (search and planning).
  • More flexible than a reflex agent (e.g. rain / new destination). In the reflex agent, the entire database of rules would have to be rewritten.
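The "what will happen if I do …?" reasoning above can be sketched as a search over action sequences. This is a minimal illustration assuming a deterministic successor model; the one-dimensional street world and all names are made up for the example, and a real planner would use a richer state space and search algorithm.

```python
# Sketch of goal-based action selection: instead of matching
# condition-action rules, search for an action sequence that reaches
# a goal state (breadth-first search over a deterministic model).
from collections import deque

def plan(start, goal, successors):
    """Return a shortest action sequence from start to goal, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None

# Usage: a taxi on a 1-D street deciding how to reach its destination.
def successors(x):
    return [("right", x + 1), ("left", x - 1)]

print(plan(0, 3, successors))   # ['right', 'right', 'right']
```

Changing the destination only means calling `plan` with a new goal; a reflex agent would instead need its whole rule base rewritten, which is exactly the flexibility the slide points out.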

Slide 21: 5. Utility-based agent
Diagram: environment → sensors → state ("What the world is like now", maintained using "How the world evolves" and "What my actions do", plus "What it will be like if I do action A" and "How happy I will be in such a state") → utility → "What action I should do now" → effectors → environment.

Slide 22: Utility-based agent …
  • When there are multiple possible alternatives, how to decide which one is best?
  • A goal only specifies a crude distinction between a happy and an unhappy state, but often a more general performance measure is needed that describes the "degree of happiness".
  • Utility function U: State → Reals, indicating a measure of success or happiness at a given state.
  • Allows decisions comparing choices between conflicting goals, and between the likelihood of success and the importance of a goal (if achievement is uncertain).
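Utility-based action choice can be sketched in a few lines, assuming a model `result(state, action)` that predicts the next state and a utility function over states. The taxi example and its numbers are illustrative assumptions, showing how a utility trades off conflicting goals (progress vs. safety) where a plain goal could not.

```python
# Sketch of a utility-based agent's decision step: predict the state
# resulting from each action and pick the action maximizing U.

def utility_based_agent(state, actions, result, utility):
    """Pick the action whose predicted resulting state has maximal utility."""
    return max(actions, key=lambda a: utility(result(state, a)))

# Usage: a taxi trading off speed (progress) against safety.
def result(state, action):
    speed = {"accelerate": state["speed"] + 10,
             "cruise": state["speed"],
             "brake": state["speed"] - 10}[action]
    return {"speed": speed}

def utility(state):
    # Happiest near 50 km/h: progress without recklessness.
    return -abs(state["speed"] - 50)

print(utility_based_agent({"speed": 70},
                          ["accelerate", "cruise", "brake"],
                          result, utility))   # brake
```

With a goal ("reach the destination") both accelerating and braking can look equally acceptable; the utility function U: State → Reals is what ranks them.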

Slide 23: Properties of environments

Slide 24: Properties of environments
  • Accessible (observable): The agent's sensory apparatus gives it access to the complete state of the environment.
  • Deterministic: The next state of the environment is completely determined by the current state and the actions selected by the agent. Subjective non-determinism arises from:
    - Limited memory (poker)
    - An environment too complex to model directly (weather, dice)
    - Inaccessibility
  • Episodic: The agent's experience is divided into independent "episodes," each episode consisting of the agent perceiving and then acting. The quality of an action depends just on the episode itself, because subsequent episodes do not depend on what actions occur in previous episodes. No need to think ahead.

Slide 25: Properties of environments …
  • Static: If the environment can change while the agent is deliberating, then the environment is dynamic; otherwise it is static. In a dynamic environment the agent needs to worry about time and to observe while deliberating.
  • Discrete: If there is a limited number of distinct, clearly defined percepts and actions, we say the environment is discrete.

Slide 26:

Environment | Accessible | Deterministic | Episodic | Static | Discrete
Chess with a clock | Yes | Yes | No | Semi | Yes
Chess without a clock | Yes | Yes | No | Yes | Yes
Poker | No | No | No | Yes | Yes
Backgammon | Yes | No | No | Yes | Yes
Taxi driving | No | No | No | No | No
Medical diagnosis system | No | No | No | No | No
Image-analysis system | Yes | Yes | Yes | Semi | No
Part-picking robot | No | No | Yes | No | No
Refinery controller | No | No | No | No | No
Interactive English tutor | No | No | No | No | Yes

Slide 27: Running the agents and the environment

procedure RUN-ENVIRONMENT(state, UPDATE-FN, agents, termination)
  inputs: state, the initial state of the environment
          UPDATE-FN, function to modify the environment
          agents, a set of agents
          termination, a predicate to test when we are done
  repeat
    for each agent in agents do
      PERCEPT[agent] ← GET-PERCEPT(agent, state)
    end
    for each agent in agents do
      ACTION[agent] ← PROGRAM[agent](PERCEPT[agent])
    end
    state ← UPDATE-FN(actions, agents, state)
  until termination(state)
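The simulator loop can be sketched in Python as below; `get_percept`, the agent programs, and the update function are all supplied by the caller. Note one simplification: this version checks `termination` before the first step rather than after it, whereas the repeat-until pseudocode always runs at least one round. The counter usage is a toy assumption.

```python
# Sketch of RUN-ENVIRONMENT: gather all percepts, then all actions,
# then apply them to the environment in one synchronized update.

def run_environment(state, update_fn, agents, termination, get_percept):
    while not termination(state):
        percepts = {name: get_percept(name, state) for name in agents}
        actions = {name: program(percepts[name])
                   for name, program in agents.items()}
        state = update_fn(actions, agents, state)
    return state

# Usage: a single agent steps the environment until a counter reaches 3.
agents = {"counter": lambda percept: "increment"}
final = run_environment(
    state=0,
    update_fn=lambda actions, agents, s: s + 1,
    agents=agents,
    termination=lambda s: s >= 3,
    get_percept=lambda name, s: s)
print(final)   # 3
```

Gathering every agent's percept before any agent acts keeps the round synchronous: no agent sees the effects of another agent's action within the same step.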
