%%alias Aliased from [Robot Controllers|RobotControllers] %%

There are a number of different ways to control a robot, but let's first list the different levels of autonomy ("freedom from external control or influence; independence"). The Society of Automotive Engineers (SAE) proposed a six-level scale (Levels 0–5) for autonomous vehicles. I'll adapt that to robotics specifically below:

* __Level 0__: not autonomous, fully remote-controlled; the robot cannot operate on its own and simply responds to external controls, like from a joystick or control box. E.g., a radio-controlled car or airplane.
* __Level 1__: the robot can perform a single task without guidance, once that task is initiated. In performing that task it may not even take sensor or environmental data into consideration.
* __Level 2__: navigation is largely automated, and the robot uses sensor data to handle obstacle avoidance, but the overall control of the robot and any tasks are remotely directed. This includes things like telerobotic submarines and drones, which are guided externally but can perform the task of navigation on their own, e.g., guidance by a human over a video feed, or moving between GPS coordinates.
* __Level 3__: in robotics this is called telerobotics. Mars rovers are the classic example: since a one-way signal to Mars takes on average about 13 minutes, any robot there needs to be semi-autonomous, able to navigate and perform directed tasks (like performing Level 1 experiments), but it still needs guidance in doing so.
* __Level 4__: fully autonomous; the robot requires no external guidance and can operate entirely on its own. It may receive a task to perform remotely, but it can perform that task without any help. As with the SAE definition, Level 4 robots can operate safely within a given, limited environment.
* __Level 5__: the same as Level 4, but the robot can operate in almost any environment, including uncontrolled places like a forest or cityscape.
That said, there are a few different approaches to ''autonomous behaviour''. Some of these come out of the history of robotics research, such as the MIT Robotics Lab.

! 1. Deliberative (Hierarchical) Control

* __Structure__: Sense → Plan → Act
* __Description__:
** __Deliberative control__ systems follow a structured “sense-plan-act” approach, where the robot first gathers sensor data, builds an internal model of the environment, and then uses that model to generate a plan before executing any actions. This architecture excels in predictable, well-structured environments and is well-suited for complex tasks that require strategic decision-making, such as path planning or task scheduling. However, its reliance on accurate models and heavy computation makes it less responsive to rapidly changing conditions or environments with high uncertainty.
* __Features__:
** Uses detailed models of the world.
** Performs long-term planning before acting.
* __Pros__:
** Capable of sophisticated reasoning.
** Good for tasks requiring long-term planning.
* __Cons__:
** Computationally intensive.
** Not suitable for dynamic or unpredictable environments.
* __Example__: A robot planning an entire route before moving.

! 2. Reactive Control

* __Structure__: Direct mapping from sensor input to actuator output.
* __Description__:
** __Reactive control__ bypasses complex modeling and long-term planning by responding directly to sensory inputs with predefined behaviors or control rules. These systems are designed for speed and robustness, especially in dynamic or uncertain environments, where quick decisions are critical. Instead of maintaining an internal map, a reactive robot might immediately turn away from obstacles or follow light sources based solely on current sensor readings.
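As a rough sketch of that direct sensor-to-actuator mapping (the range values, safe distance, and wheel-speed convention here are all invented for illustration, not from any particular robot API):

```python
# A minimal reactive controller: two range readings map straight to
# wheel speeds with no world model and no planning. Each control cycle
# simply re-reads the sensors and re-applies the same rule.

def reactive_step(left_dist, right_dist, safe_dist=0.5):
    """Map two range readings (metres) directly to (left, right) wheel speeds."""
    if left_dist < safe_dist and left_dist <= right_dist:
        return (0.4, -0.4)   # obstacle closest on the left: spin right
    if right_dist < safe_dist:
        return (-0.4, 0.4)   # obstacle on the right: spin left
    return (0.5, 0.5)        # path clear: drive straight ahead

print(reactive_step(2.0, 2.0))  # clear path -> (0.5, 0.5)
print(reactive_step(0.2, 1.0))  # left blocked -> (0.4, -0.4)
```

Note there is no state at all between calls; that statelessness is both the strength (speed, robustness) and the weakness (no memory of goals) of the approach.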
While fast and simple, reactive systems can struggle with tasks that require coordination over time or consideration of distant goals.
* __Features__:
** Ignores internal models and global planning.
** Based on behaviors triggered by environmental stimuli.
* __Pros__:
** Fast response times.
** Robust in dynamic environments.
* __Cons__:
** Limited long-term planning.
** Can struggle with complex tasks.
* __Example__: A robot avoiding obstacles by turning when it detects something in its path.

! 3. Hybrid Control

* __Structure__: Combines deliberative and reactive layers.
* __Description__:
** __Hybrid control__ systems aim to combine the strengths of both deliberative and reactive approaches by layering a high-level planner over low-level reactive behaviors. The planning layer is responsible for setting goals and generating overall strategies, while the reactive layer handles immediate responses to the environment. This architecture allows for strategic decision-making without sacrificing responsiveness to new or unexpected stimuli. Despite its power and flexibility, designing an effective hybrid system can be complex due to challenges in coordinating the interactions between layers.
* __Features__:
** Typically includes a high-level planner and low-level reactive behaviors.
* __Pros__:
** Balances planning and responsiveness.
** Versatile across various tasks and environments.
* __Cons__:
** Can be complex to design and integrate.
* __Example__: A robot that plans a general route but uses local behaviors to handle immediate obstacles.

! 4. Behavior-Based Control

* __Structure__: Set of behaviors running in parallel; behavior arbitration decides which to activate.
* __Description__:
** __Behavior-based control__ organizes robot intelligence as a collection of independent behaviors, each responsible for a specific function like obstacle avoidance, goal-seeking, or wall-following.
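A toy Python sketch of such a collection of behaviors, with a simple fixed-priority arbiter deciding which one drives the robot (the behavior names, sensor dictionary format, and priority order are all assumptions for illustration):

```python
# Each behavior independently inspects the sensors and either proposes
# an action or abstains (returns None). A fixed-priority arbiter scans
# the behaviors in order and activates the first one that proposes.

def avoid_obstacle(sensors):
    if sensors["front_dist"] < 0.3:
        return "turn_left"       # safety behavior: back off from walls
    return None

def follow_wall(sensors):
    if sensors["right_dist"] < 1.0:
        return "forward_along_wall"  # task behavior: hug the right wall
    return None

def wander(sensors):
    return "forward"             # default behavior: always active

# Priority order: safety first, then the task, then the default.
BEHAVIORS = [avoid_obstacle, follow_wall, wander]

def arbitrate(sensors):
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action

print(arbitrate({"front_dist": 0.2, "right_dist": 0.5}))  # turn_left
print(arbitrate({"front_dist": 2.0, "right_dist": 0.5}))  # forward_along_wall
```

Real systems like the subsumption architecture run these behaviors concurrently and let higher layers suppress lower ones; the sequential scan above is only the simplest possible arbiter.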
These behaviors run in parallel, and a behavior arbitration mechanism determines which one controls the robot at any moment. The architecture emphasizes modularity and robustness, enabling robots to exhibit complex, adaptive behavior through simple interactions. However, because behaviors can interact in unpredictable ways, designing and tuning a behavior-based system often requires extensive testing and empirical refinement. See also: [BehaviourBasedSystems]
* __Features__:
** Each behavior handles a specific task (e.g., follow wall, avoid obstacle).
* __Pros__:
** Modular and robust.
** Adaptable to changing conditions.
* __Cons__:
** Behavior coordination can be tricky.
** Emergent behavior is hard to predict.
* __Example__: Subsumption architecture (e.g., Rodney Brooks' robots).

! 5. Learning-Based Control

* __Structure__: Uses machine learning models to learn control policies.
* __Description__:
** __Learning-based control__ relies on data-driven methods (such as supervised learning, unsupervised learning, or reinforcement learning) to develop control policies that map sensory input to actions. These systems are particularly useful in environments that are too complex or poorly understood to model explicitly. By training on real or simulated data, learning-based robots can develop skills like object recognition, navigation, or manipulation. While offering adaptability and generalization, these systems often require significant amounts of data, and their decision-making processes may lack transparency, raising concerns about safety and interpretability.
* __Features__:
** Can use supervised, unsupervised, or reinforcement learning.
* __Pros__:
** Capable of adapting to unknown or complex environments.
** Improves with experience.
* __Cons__:
** Requires large datasets or training time.
** Often lacks interpretability and safety guarantees.
* __Example__: Deep reinforcement learning for robotic grasping.

! 6. Model Predictive Control (MPC)

* __Structure__: Solves an optimization problem over a moving time horizon.
* __Description__:
** __Model Predictive Control__ (MPC) involves solving an optimization problem at each control step to determine the best sequence of actions over a finite future horizon, based on a predictive model of the system’s dynamics. By continuously updating this plan as new sensor data arrives, MPC allows for precise control that can handle system constraints and anticipate future events. This makes it ideal for tasks like trajectory tracking or collision avoidance in autonomous vehicles. However, the computational load of real-time optimization can limit its use in systems with tight timing constraints or limited processing power.
* __Features__:
** Uses a dynamic model to predict future states.
* __Pros__:
** Precise and optimal over short horizons.
** Handles constraints well.
* __Cons__:
** Computationally demanding in real time.
* __Example__: Autonomous cars using MPC for path tracking.
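As a toy illustration of the receding-horizon idea, here is a brute-force MPC sketch for a 1-D point mass tracking a target position. Real MPC formulations use proper numerical optimizers (e.g., QP solvers) rather than enumerating action sequences, and every constant here (time step, horizon, costs, action set) is an assumption for the sketch:

```python
# At every control step: predict the cost of every short acceleration
# sequence with a simple dynamics model, pick the cheapest sequence,
# apply ONLY its first action, then re-plan from the newly measured
# state. That re-planning loop is the "receding horizon".
import itertools

DT, TARGET = 0.1, 1.0

def rollout_cost(pos, vel, accels):
    """Predicted cost of an acceleration sequence from state (pos, vel)."""
    cost = 0.0
    for a in accels:
        vel += a * DT
        pos += vel * DT
        cost += (pos - TARGET) ** 2                 # stage cost: tracking error
    return cost + 10.0 * ((pos - TARGET) ** 2 + vel ** 2)  # terminal cost

def mpc_step(pos, vel, horizon=3, actions=(-1.0, 0.0, 1.0)):
    plan = min(itertools.product(actions, repeat=horizon),
               key=lambda seq: rollout_cost(pos, vel, seq))
    return plan[0]   # receding horizon: only the first action is executed

# Closed loop: re-measure the state and re-optimize at every step.
pos, vel = 0.0, 0.0
for _ in range(100):
    a = mpc_step(pos, vel)
    vel += a * DT
    pos += vel * DT
print(round(pos, 2))  # final position; should settle near the 1.0 target
```

The terminal cost penalizes arriving at the end of the horizon fast and far from the target, which is what lets such a short horizon brake in time; choosing horizon length and cost weights is much of the practical work in real MPC designs.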
!! Comparison

Here's a comparison of these approaches:

|| Approach || Planning Capability || Reactivity || Modularity || Complexity || Typical Applications
|| Remote-Control | None | None | Varies | Varies | Radio-controlled vehicles, bomb disposal robots, toys
|| Deliberative | High | Low | Low | High | Long-term navigation, strategic task planning
|| Reactive | None | High | Moderate | Low | Obstacle avoidance, simple mobile robots
|| Hybrid | High | High | High | High | Service robots, autonomous vehicles, drones, search and rescue
|| Behavior-Based | Low–Moderate | High | High | Moderate | Exploration robots, mobile agents
|| Learning-Based | Varies (data-dependent) | Moderate–High | Low–Moderate | High | Robot manipulation, visual navigation, adaptive tasks
|| Model Predictive (MPC) | High (short-term horizon) | Moderate | Low | High | Path tracking, autonomous driving, quadrotor control

Each approach excels in different contexts depending on the environment, task complexity, and hardware constraints.

----

See also:

* [Subsumption Architecture|SubsumptionArchitecture]
* [PID Controller|PidController]
* Sensor Libraries (generally listed along with the respective sensor)

----

[{Tag Software RobotControlSystems}]