
Gameplay Animation: Understanding the Differences Between Player and NPC


When I conduct interviews or review portfolios, I often see the same pattern.

Player characters that move beautifully. Smooth combos. Sharp dashes.

But almost never NPCs.


And it makes sense. In school, we learn to animate "a character."

We work on a cycle, an attack, locomotion—without implementation context.


But in the studio, the first assignment is rarely what we imagine.

It's not the hero. It's not the final boss.

It's often... an NPC.


And that's when many juniors discover the rules have changed.

  • The player acts. The NPC reacts.

  • The player controls. The NPC signals.

  • The player must feel powerful. The NPC must be readable, fair, predictable.


Not understanding this distinction means spending weeks correcting animations that "don't work in gameplay"... without knowing why.

It's not just about style.


It's about logic, control systems, and responsibility.

In this article, I share the concrete differences I wish I'd known early in my career.


The ones that aren't in tutorials.

The ones that make animations "work in gameplay"—or not.





Two Animation Logics: Control or Readability


Player Side: Control Priority


The player is in command. Every animation must extend their intention, with no perceptible latency. Beauty isn't the priority; feel is.


  • Triggering must be immediate. The animation starts on input, not 10 frames later.

  • Anticipation is reduced to the bare minimum. No wind-up, no voluntary delay.

  • Recovery must be interruptible to allow combos or cancels.

  • Blend per bone allows aiming while running, or jumping while looking elsewhere—control takes priority.

  • Root motion must be mastered: clean transitions between in-place and moving animations, without sliding or teleporting.

  • The system must handle cancel windows and input buffering to smooth out chains.


Everything is designed so the player feels powerful, responsive, and in control.

And every micro-latency is visible and felt.
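
Cancel windows and input buffering can be sketched as a tiny frame-based system. Everything here is illustrative: the class, the frame counts, and the input names are invented, not taken from any engine.

```python
BUFFER_FRAMES = 6          # how many frames a pressed input stays valid
CANCEL_WINDOW = (12, 20)   # frames of the current attack that accept a cancel

class AttackState:
    def __init__(self):
        self.frame = 0
        self.buffered = None   # (input_name, frame_pressed)

    def press(self, name):
        # Store the input instead of dropping it if we cannot act yet.
        self.buffered = (name, self.frame)

    def tick(self):
        self.frame += 1
        if self.buffered is None:
            return None
        name, pressed_at = self.buffered
        if self.frame - pressed_at > BUFFER_FRAMES:
            self.buffered = None           # stale input: drop it
            return None
        if CANCEL_WINDOW[0] <= self.frame <= CANCEL_WINDOW[1]:
            self.buffered = None           # consume it inside the window
            return name
        return None

state = AttackState()
fired = None
for f in range(25):
    if f == 10:
        state.press("attack2")   # pressed two frames before the window opens
    result = state.tick()
    if result:
        fired = (result, state.frame)
```

Here the press lands two frames early, sits in the buffer, and fires on the first frame of the cancel window—the chain feels instant to the player even though the input arrived "too soon."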


Ori perfectly demonstrates the notion of player control

NPC Side: Gameplay Readability Priority


The NPC doesn't respond to input. It executes an intention. Its animation isn't there to "look pretty"; it's there to signal an action.

  • "I'm going to attack."

  • "I'm looking for you."

  • "I'm vulnerable now."


If these intentions aren't readable, combat becomes unfair. The player can't anticipate, react, or punish, and the experience becomes frustrating.


But it's not just about readability. There are technical constraints nobody takes the time to explain.

  • The impact zone is defined by the game designer: range, timing, punish window. If the animation moves the NPC at the wrong time or in the wrong direction, the attack becomes either too forgiving or too punishing. The player can get hit while out of range. That's unfair.

  • Loops must be clean. A player never stays idle for more than a few seconds. An NPC can wait a long time. If the idle animation has a micro-glitch every 3 seconds, it shows. And it breaks immersion.

  • Root motion must be precise. The NPC must stop in the right place, at the right angle, with exact control of its position. If the character slides after stopping, even slightly, its chains will be offset, its interactions miscalibrated, and the experience becomes weird.


The NPC isn't there to shine. It's there to serve the gameplay. And that requires a technical rigor many underestimate.
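
The anticipation/active/recovery split can be made concrete with a small phase model. The frame counts and range below are invented; in production they come from design data, not from the animator alone.

```python
ANTICIPATION = 18   # long, readable wind-up the player can react to
ACTIVE = 6          # frames where the hitbox can actually connect
RECOVERY = 24       # vulnerable frames: the punish window

def phase_at(frame):
    """Which phase of the NPC attack a given frame falls in."""
    if frame < ANTICIPATION:
        return "anticipation"
    if frame < ANTICIPATION + ACTIVE:
        return "active"
    if frame < ANTICIPATION + ACTIVE + RECOVERY:
        return "recovery"
    return "done"

def can_hit_player(frame, distance, attack_range=2.0):
    # The hit only lands during active frames AND inside range: getting hit
    # while out of range is exactly the unfairness described above.
    return phase_at(frame) == "active" and distance <= attack_range
```

With numbers like these, the player gets 18 frames to read the wind-up and 24 frames to punish—shrink either and the attack drifts from "fair" to "frustrating."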


NPCs in TLOU2 are particularly aware of their environment

Engine Implementation: Two Decision Chains


We often hear: "the player has blend spaces, the NPC has state machines."

That's true, but it's too vague. In production, these are two radically different logics.


The Player: Direct Control

The player presses a button. The system detects the input. It triggers the animation.

It's direct. It's fast. It's responsive.

To manage this, we use:

  • Motion matching: dynamic selection from an animation database based on player state

  • Blend per bone: the torso can aim independently from the legs

  • Cancel windows: frames dedicated to interruption or chaining

  • Input buffering: anticipating the next input to smooth transitions

  • Mastered root motion: clean transitions between in-place and moving animations

Everything is optimized so the player feels no latency.

Animation serves control.
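
Motion matching at its core is nearest-neighbor selection over a pose database. This is a toy sketch: two features and four clips, where real systems match dozens of features across thousands of poses; clip names and feature values are invented.

```python
import math

# Each entry: (clip_name, (forward_velocity, lateral_velocity))
DATABASE = [
    ("idle",     (0.0, 0.0)),
    ("walk_fwd", (1.5, 0.0)),
    ("run_fwd",  (4.0, 0.0)),
    ("strafe_l", (1.5, -1.5)),
]

def best_clip(current_features):
    # Pick the clip whose stored features are closest to the player's state.
    return min(DATABASE, key=lambda e: math.dist(e[1], current_features))[0]
```

Because selection runs every frame against the player's actual state, the result tracks input far more tightly than hand-authored transitions could.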


The NPC: Indirect Control

The NPC doesn't respond to input. It executes an intention.

And that intention passes through several layers:

  • AI: perception → decision ("attack the player")

  • Behavior tree: selection of appropriate behavior

  • State machine: transition between states ("patrol" → "attack")

  • NavMesh: pathfinding calculation if movement required

  • Animation: triggered based on behavior and position

The NPC can't be as reactive as the player—and that's not a problem.

It's a systemic logic, designed for readability and coherence.
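
The layered chain above can be compressed into a minimal sketch—all names are invented. Perception feeds a decision, the decision drives the state, and the animation is picked from the state. Notice there is no input anywhere.

```python
def perceive(npc, player_pos):
    # Perception layer: can the NPC see the player?
    dx = player_pos[0] - npc["pos"][0]
    dy = player_pos[1] - npc["pos"][1]
    npc["sees_player"] = (dx * dx + dy * dy) ** 0.5 < npc["sight_range"]

def decide(npc):
    # Stand-in for a behavior tree: a fixed priority order.
    return "attack" if npc["sees_player"] else "patrol"

ANIM_FOR_STATE = {"patrol": "walk_cycle", "attack": "attack_swing"}

def step(npc, player_pos):
    perceive(npc, player_pos)
    npc["state"] = decide(npc)      # state machine, reduced to its core
    return ANIM_FOR_STATE[npc["state"]]

npc = {"pos": (0.0, 0.0), "sight_range": 5.0, "state": "patrol"}
```

Each extra layer adds a little latency, which is why the NPC can never be as reactive as a directly-controlled character—and doesn't need to be.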



An Interdisciplinary Complexity


Implementing an NPC isn't just setting keys in a cycle.

It's collaborating with:

  • Programmers: AI control, transitions, engine synchronization

  • Game designers: navmesh calibration, interaction logic, gameplay intention

  • Level designers: point of interest positioning, spatial coherence

  • Animators: readability, anticipation, coherence with gameplay timings


That's why implementing an NPC alone is rarely viable.

The player can be animated with relative autonomy, but the NPC is a systemic object, dependent on the entire production chain.



What About Schedules?


We sometimes talk about schedules for NPCs, but in reality what we call a schedule is most often a set of simulated or contextually triggered states, not a rigid time-based routine.


  • In most projects, transitions between states ("patrol," "idle," "interact," "alert") are driven by gameplay, not time of day

  • These transitions must be cleanly interruptible (e.g., if the player attacks during an interaction)

  • Each state may require specific animations, but the schedule is generally systemic, not scripted
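
A gameplay-driven, interruptible schedule boils down to event-based transitions—no clock anywhere. Event and state names below are invented for illustration.

```python
def on_event(state, event):
    """Return the NPC's next state given a gameplay event."""
    if event == "player_attacks":
        return "alert"              # interrupts anything, even mid-interaction
    if event == "reached_poi" and state == "idle":
        return "interact"
    if event == "interaction_done" and state == "interact":
        return "idle"
    return state                    # ignore events that don't apply
```

The key line is the first one: a threat must be able to cut an interaction short cleanly, which in turn means every interaction animation needs a valid exit at any point.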


That said, some projects go further.

On Ghost Recon Wildlands, for example, civilians had real hourly schedules: work, lunch break, rest—all driven by the time of day.

But most productions don't have this granularity.


The player has no schedule. They act freely, without routine constraints.


The NPC, however, must integrate into the world with coherent, interruptible, and sometimes simulated behaviors.


RDR2's NPCs have their own lives

Interactions: Where NPCs Become Complex


This is rarely covered in training, yet it's where you'll spend a good part of your time in production.


NPCs don't just move. They interact with the world—objects, characters, environments—and each interaction poses specific constraints.


Interaction with Objects


Let's take a simple example: opening a door.

  • The NPC must position itself precisely in front of the door (snap to position)

  • The animation must end in the right place, or they walk through the door

  • If the door swings, you must synchronize the opening with the arm movement

  • If there are multiple door types (single, double, sliding), you need variants or adaptive systems


Player side:

  • Interaction is often more flexible, with systems that tolerate slight imprecision

  • Animation can adapt dynamically but remains driven by player control

  • The goal is to preserve fluidity and feel without breaking gameplay

On the NPC side, we expect spatial and temporal precision.
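
That precision usually comes from an interaction marker the NPC aligns to before the animation plays. This is a sketch with invented tolerances; a real engine would blend the correction in over a few frames rather than teleport.

```python
import math

def snap_to_marker(npc_pos, npc_yaw, marker_pos, marker_yaw,
                   pos_tol=0.05, yaw_tol=2.0):
    """Align the NPC to the door's interaction marker if it's off target."""
    dist = math.dist(npc_pos, marker_pos)
    # Shortest signed angle difference, folded into [-180, 180).
    yaw_err = abs((npc_yaw - marker_yaw + 180.0) % 360.0 - 180.0)
    if dist <= pos_tol and yaw_err <= yaw_tol:
        return npc_pos, npc_yaw        # already aligned: no correction
    return marker_pos, marker_yaw      # align so the animation ends in place
```

If the tolerance is too loose, the hand misses the handle; too tight, and the NPC visibly shuffles before every door. Calibrating it is part of the job.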



Opening doors in video games...


Interaction with Characters


Two NPCs talking. Or an NPC talking to you.

  • Proxemics: not too close (awkward), not too far (loss of readability)

  • Gaze: the NPC must follow your character with their eyes, not stare into space

  • Posture: the torso must orient toward you, not just the head—otherwise the angle becomes too restrictive


These elements are generally handled by enhanced look-at systems, combined with priority rules and blend per bone.


But their calibration varies enormously from project to project:

  • Too subtle, the gaze becomes invisible or even disturbing (the character seems to talk into space, without connection)

  • Too rigid, the posture becomes unrealistic or impossible to maintain

  • Poorly synchronized, the torso stays frozen and breaks the coherence of the interaction


It's not rare technology; it's the quality of implementation that makes the difference.
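
One common calibration pattern is to split the look-at angle between head and torso with clamped contributions. The limits below (in degrees) are invented for illustration; the head turns first, and what it cannot cover spills into the torso—which is exactly why torso orientation matters beyond a certain angle.

```python
def distribute_look_at(target_yaw, head_limit=60.0, torso_limit=30.0):
    """Split a look-at yaw between head and torso, each clamped."""
    head = max(-head_limit, min(head_limit, target_yaw))
    torso = max(-torso_limit, min(torso_limit, target_yaw - head))
    # If the clamps can't cover the target, the NPC should turn its body.
    reachable = abs(head + torso - target_yaw) < 1e-6
    return head, torso, reachable
```

Small angles stay in the neck (natural), larger angles engage the torso, and beyond the combined limit the system should fall back to a turn-in-place rather than an impossible twist.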

When these interactions involve the player, they're generally limited or handled by cinematics.


For NPCs, they must work in real-time, in all cases, with constraints of distance, gaze, posture, and engine synchronization.


Interaction with Environment


An NPC sits on a bench.

But in the game, there are 15 visual variants: different heights, armrests, lengths, styles.

Yet you can't create a single generic "sit down" animation.


You must choose a strategy:

  • Create one animation per bench type → heavy in production

  • Use procedural IK to adapt the pose → complex to stabilize

  • Accept an approximation → often rejected by the game designer


A fourth option is to normalize the meshes:

Keep identical geometry (height, depth, sitting position) and vary only the visuals.

This allows reducing the number of animation variations while maintaining aesthetic diversity.


It's an effective solution, but it must be anticipated upstream, from a design, tech, and animation perspective.
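
The normalized-mesh strategy can be expressed as shared gameplay geometry plus varied visuals. All data below is invented: every normalized bench reuses one sit animation, and only off-spec seats need dedicated work.

```python
# Shared gameplay geometry every "normalized" bench must respect.
NORMALIZED_SEAT = {"height": 0.45, "depth": 0.40}

BENCHES = [
    {"mesh": "bench_park",   "seat": {"height": 0.45, "depth": 0.40}},
    {"mesh": "bench_ornate", "seat": {"height": 0.45, "depth": 0.40}},
    {"mesh": "stool_tall",   "seat": {"height": 0.70, "depth": 0.35}},
]

def sit_animation_for(bench):
    # Normalized variants all reuse the generic clip; anything off-spec
    # needs a dedicated animation (or an IK adjustment pass).
    if bench["seat"] == NORMALIZED_SEAT:
        return "sit_generic"
    return "sit_custom_" + bench["mesh"]
```

Fifteen visual variants, one animation—as long as design, tech, and animation agree on the shared geometry before the assets are built.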


The player? They rarely sit. And when they do, it's a cinematic. Not gameplay.



Field Experience: When the Difference Becomes Concrete


I've had the opportunity to animate both player characters and NPCs on several productions.

And it's in the studio that the distinction becomes obvious, not theoretical, but systemic.


Player Character: Rich Variations, Demanding Integration

On the main character, each gameplay context can modify locomotion.

Fatigue, tension, environment, injury—everything is reflected in the character's attitude.

  • The player can walk more slowly to avoid making noise

  • They can limp if injured, or speed up if in danger

  • These variations reinforce immersion, but they must remain coherent with each other

And that's where the real challenge begins:

A walk with crossed arms won't blend properly with a start/stop with arms at the sides.

You need to implement intelligently, to enrich without multiplying animations.

It's not just "more content." It's more integration logic.


Locomotion variations on Beyond Two Souls


NPCs: Strategic Simplification, Targeted Credibility

NPCs are often more numerous, less visible, less prioritized.

But their role remains essential: they populate the world, provide rhythm, support gameplay.

  • One or two variants suffice in most cases

  • Their priority: be readable, coherent, credible—without overloading production

  • Too many poorly maintained animations = jumping loops, visual bugs, broken immersion

That said, the current trend is evolving.

We're trying to make NPCs more aware of their environment, more reactive, more credible.

Without falling into complexity overload.

A good NPC is one that works well, even with few animations.



Ghost Recon Breakpoint: Maximum Fluidity on Player Side

On Ghost Recon, the main character used motion matching:

thousands of animations dynamically selected based on player state.

  • Exceptional fluidity

  • Frame-perfect responsiveness

  • Feel prioritized

The system was designed to extend player intention without latency.

Every input had to trigger an immediate, coherent, and fluid response.

NPC side, we stayed with classic state machines:

calibrated cycles, optimized mocap, robust logic.

Why? Because:

  • The player feels every micro-latency

  • For NPCs, nobody notices a 100 ms delay

  • Implementing motion matching for 50 NPC types is complexity overload

Two architectures. Two logics. Two levels of requirement.


Player and NPC movement in GR Breakpoint



Prince of Persia: The Lost Crown: Calibrating the Threat

On PoP, the question came up often:

"Should this NPC move faster or slower than the player?"

  • Faster → constant pressure, direct threat

  • Slower → the player can manage distance, anticipate

Speed wasn't a detail.

It defined the type of threat, combat rhythm, reaction margin.

And animation followed that intention:

clear anticipation, visible punish window, coherence with hitboxes.

Animation isn't decorative. It serves gameplay.


Prince of Persia, at the service of gameplay




Best Practices: What to Aim For


Player Character

  • Fluid responsiveness: transitions without latency, input always priority

  • Blend per bone: independent actions (torso, arms, legs) to preserve control

  • Mastered root motion: clean transitions between in-place and moving animations

  • Cancel windows and buffering: frame-perfect logic for chains

  • Testing in gameplay context: validation with input, camera, feedback


NPC

  • Behavioral readability: clear anticipation, explicit posture, visible recovery

  • Robust loops: glitch-free idle, stable cycles over long duration

  • Calibrated interactions: gaze, posture, social distance, environment synchronization

  • Precise root motion: spatial coherence for transitions and hitboxes

  • Respected schedule: clean transitions between states, managed interruptions


These best practices aren't "tips"; they're validation criteria.

In the studio, this is what makes the difference between a "pretty" animation and a functional one.



Red Flags: What to Avoid


Player

  • ❌ "Pretty" animation but not tested in gameplay → control rupture

  • ❌ Poorly calibrated blend → sliding, teleporting, visual incoherence

  • ❌ No cancel window → blocked input, immediate frustration

  • ❌ Unnecessary visual anticipation → slows feel without gameplay benefit

  • ❌ Neglected root motion → broken transitions between in-place and moving animations


NPC

  • ❌ No anticipation → unfair, unreadable attack

  • ❌ Idle loop with micro-glitch → "visual bug" effect, loss of immersion

  • ❌ Fixed or misoriented gaze → creepy effect, social incoherence

  • ❌ Unsynchronized environment interaction → NPC walks through a bench, a door, an object





Conclusion: Two Roles, Two Responsibilities


Animating a player character and animating an NPC isn't "the same job with less budget."


It's a difference in logic, control, and function.

  • The player acts: requires control, responsiveness, immediate feedback

  • The NPC reacts: must be readable, credible, coherent with the system

  • The player traverses the world. The NPC lives in it—with a schedule, interactions, constraints


It's not about complexity.

It's about function.


And in Interviews?


Don't just say "I animated a character."


Formulate the professional logic:

"I animated a player character, so I optimized for responsiveness: no anticipation, minimal recovery, blend spaces for fluidity.

But I understand that for an NPC, the priorities are different: readability, visible anticipation, coherence with impact zones defined by design."


You've just shown you understand gameplay stakes.

And you're ahead of 90% of candidates who just say "I like animating."


Understanding this distinction means not only animating better, but also collaborating better with design, code, QA, and defending your choices better in production and interviews.






© 2025 by AniMotion. Created with Wix.com