AI alignment is a field of AI safety research focused on developing AI systems that behave as their users intend and achieve those users' desired outcomes, ensuring the model is "aligned" with human values.