# RL Study Notes: Basic Concepts
A summary of core definitions in Reinforcement Learning (State, Action, Reward) and the elements of Markov Decision Processes (MDP).
## Basic Concepts
### I. Core Definitions
- **State**: The status of the Agent relative to the environment. In a grid world, this is typically the Agent's coordinate location, which can be represented as a vector, e.g. $s = (x, y)$.
- **State space**: The set of all possible states, denoted $\mathcal{S} = \{s_i\}$. For example, a grid world with $n$ cells has $\mathcal{S} = \{s_1, s_2, \dots, s_n\}$. It is essentially just a set.
- **Action**: The moves the Agent can take in each State. For example, in a grid world there might be five: up, down, left, right, and stay, denoted $a_1, a_2, \dots, a_5$.
- **Action space**: The set of all possible actions available in a specific state $s$, denoted $\mathcal{A}(s) = \{a_i\}$. Note: the available actions often depend on the State, i.e. $\mathcal{A}$ is a function of $s$.
- **State transition**: The process by which the Agent moves from one state to another after taking an action, denoted e.g. $s_1 \xrightarrow{a_2} s_2$. This defines the mechanism of interaction with the environment. In a virtual world it can be defined arbitrarily; in the real world it must obey objective physical laws.
- **State transition probability**: Uses a conditional probability $p(s' \mid s, a)$ to describe the uncertainty of state transitions. For example, if choosing $a_2$ at $s_1$ always moves the Agent to $s_2$, then
  $$p(s_2 \mid s_1, a_2) = 1, \qquad p(s_i \mid s_1, a_2) = 0 \ \text{ for all } s_i \neq s_2.$$
  This example describes a deterministic environment, but transitions can also be stochastic (random).
- **Policy**: The rule or function, denoted $\pi$, that tells the Agent which action to take in a given State. For example, a deterministic policy at $s_1$:
  $$\pi(a_2 \mid s_1) = 1, \qquad \pi(a \mid s_1) = 0 \ \text{ for all } a \neq a_2.$$
  Stochastic policies follow the same logic, where $\pi(a \mid s)$ represents the probability of selecting action $a$ in state $s$, with $\sum_a \pi(a \mid s) = 1$.
- **Reward**: A scalar real number $r$ received after the Agent takes a specific action in a state.
  - Positive values typically represent rewards (encouraging a behavior);
  - Negative values typically represent punishment (suppressing a behavior).

  Reward is a key human-machine interface used to guide the Agent toward the behavior we expect. Mathematically, the reward is described by the conditional probability $p(r \mid s, a)$.
- **Trajectory**: A complete State-Action-Reward chain: the Agent takes an Action in a State, receives a Reward, transitions to the next State, and repeats in a loop.
- **Return**: The sum of all Rewards along a Trajectory. Different Policies lead to different Returns.
- **Discounted Return**: For a Trajectory that runs forever, a simple sum of rewards would diverge to infinity. We therefore introduce a discount factor $\gamma \in [0, 1)$. With $\gamma$:
  $$\text{discounted return} = r_1 + \gamma r_2 + \gamma^2 r_3 + \cdots = \sum_{t=0}^{\infty} \gamma^{t} r_{t+1}.$$
  Example: if the Agent receives a constant reward $r = 1$ at every step, the plain sum diverges, but the discounted return converges to $1 + \gamma + \gamma^2 + \cdots = \frac{1}{1-\gamma}$.
  - Role of $\gamma$: it determines the Agent's "vision." A smaller $\gamma$ makes the Agent "near-sighted" (focusing on immediate rewards), while a larger $\gamma$ makes it "far-sighted" (focusing on long-term benefits). A code sketch after this list walks through a concrete computation.
- **Episode**: When interacting with the environment by following a Policy, if the Agent stops at a terminal state, the resulting trajectory is called an Episode (or Trial).
  - Episodic tasks: tasks with terminal states (a finite number of steps).
  - Continuing tasks: tasks without terminal states (infinitely many steps).
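To tie the definitions above together, here is a minimal sketch in Python, assuming a toy 3×3 grid world of my own (the coordinates, the reward values of -1 per step and +1 at the goal, and the hand-written policy are illustrative assumptions, not taken from these notes). It rolls out one episode under a deterministic policy and computes both the plain Return and the Discounted Return.

```python
# Toy grid-world sketch (illustrative assumptions: 3x3 grid, -1 reward per move,
# +1 for reaching the terminal cell; not a specific example from these notes).
GAMMA = 0.9          # discount factor gamma in [0, 1)
GRID_SIZE = 3        # states are coordinates (x, y) with 0 <= x, y < GRID_SIZE
TERMINAL = (2, 2)    # terminal state, so this is an episodic task

# Action space A(s): here the same five moves are available in every state.
ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0), "stay": (0, 0)}

def step(state, action):
    """Deterministic state transition p(s'|s,a): move if possible, otherwise stay."""
    x, y = state
    dx, dy = ACTIONS[action]
    nx, ny = x + dx, y + dy
    if 0 <= nx < GRID_SIZE and 0 <= ny < GRID_SIZE:
        next_state = (nx, ny)
    else:
        next_state = state                  # bumping into a wall keeps the state
    reward = 1.0 if next_state == TERMINAL else -1.0
    return next_state, reward

def policy(state):
    """A hand-written deterministic policy pi(a|s): walk right, then down."""
    x, y = state
    if x < GRID_SIZE - 1:
        return "right"
    if y < GRID_SIZE - 1:
        return "down"
    return "stay"

# Roll out one episode (trajectory) from the start state.
state = (0, 0)
rewards = []
while state != TERMINAL:
    action = policy(state)
    state, reward = step(state, action)
    rewards.append(reward)

plain_return = sum(rewards)
discounted_return = sum(GAMMA ** t * r for t, r in enumerate(rewards))
print("trajectory rewards:", rewards)
print("return:", plain_return, "discounted return:", round(discounted_return, 4))
```

With these assumed rewards the episode collects [-1, -1, -1, 1], so the plain return is -2 while the discounted return weights later rewards by powers of $\gamma$.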
### II. MDP (Markov Decision Process) Elements
#### 1. Sets
- State: The set of States $\mathcal{S}$
- Action: The set of Actions $\mathcal{A}(s)$, where $s \in \mathcal{S}$
- Reward: The set of Rewards $\mathcal{R}(s, a)$
#### 2. Probability Distribution (Dynamics)
- State transition probability: $p(s' \mid s, a)$
- Reward probability: $p(r \mid s, a)$
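As a sketch of how these two distributions can be written down in practice, here is a hypothetical two-state example in Python; the state names, actions, rewards, and probability values are all made up for illustration.

```python
# Hypothetical two-state MDP dynamics (all names and numbers are illustrative).
# p(s'|s,a): maps (state, action) to a distribution over next states.
transition_prob = {
    ("s1", "a1"): {"s1": 0.2, "s2": 0.8},
    ("s1", "a2"): {"s1": 1.0},
    ("s2", "a1"): {"s1": 0.5, "s2": 0.5},
    ("s2", "a2"): {"s2": 1.0},
}

# p(r|s,a): maps (state, action) to a distribution over rewards.
reward_prob = {
    ("s1", "a1"): {-1.0: 0.9, 1.0: 0.1},
    ("s1", "a2"): {0.0: 1.0},
    ("s2", "a1"): {-1.0: 0.5, 1.0: 0.5},
    ("s2", "a2"): {1.0: 1.0},
}

# Each row must be a valid probability distribution (non-negative, sums to 1).
for table in (transition_prob, reward_prob):
    for key, dist in table.items():
        assert abs(sum(dist.values()) - 1.0) < 1e-9, key
```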
#### 3. Policy
- The Agent's decision mechanism: $\pi(a \mid s)$, satisfying $\sum_{a \in \mathcal{A}(s)} \pi(a \mid s) = 1$ for every state $s$
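A stochastic policy can likewise be stored as a table and sampled from; the following is a minimal sketch with made-up states, actions, and probabilities.

```python
import random

# pi(a|s) as a lookup table (illustrative numbers); each row sums to 1.
policy_table = {
    "s1": {"a1": 0.7, "a2": 0.3},
    "s2": {"a1": 0.1, "a2": 0.9},
}

def sample_action(state):
    """Draw an action from pi(.|state)."""
    actions, probs = zip(*policy_table[state].items())
    return random.choices(actions, weights=probs, k=1)[0]

print(sample_action("s1"))  # "a1" about 70% of the time, "a2" about 30%
```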
#### 4. MDP Property
Memoryless (Markov Property): The probability of the next state and reward depends only on the current state and action, and is independent of all prior history.
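Written out (a standard formulation of the property, with the time index $t$ added here as notation):

$$p(s_{t+1} \mid s_t, a_t, s_{t-1}, a_{t-1}, \dots, s_0, a_0) = p(s_{t+1} \mid s_t, a_t), \qquad p(r_{t+1} \mid s_t, a_t, s_{t-1}, a_{t-1}, \dots, s_0, a_0) = p(r_{t+1} \mid s_t, a_t).$$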
#### 5. MDP vs Markov Process
- Markov Process: Contains only States and Transition Probabilities. The observer can only passively accept the environment’s evolution based on probability and cannot intervene.
- MDP (Markov Decision Process): Adds Decision (Action). State transitions depend not only on the current state but also on the Action taken. The Agent can actively influence the outcome probabilities by choosing different actions, rather than just passively accepting a fixed distribution.
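One standard way to see the relationship (an observation added here, not stated above): once a policy $\pi$ is fixed, the MDP reduces to a Markov process whose transition probabilities average over the actions:

$$p_\pi(s' \mid s) = \sum_{a \in \mathcal{A}(s)} \pi(a \mid s)\, p(s' \mid s, a).$$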