Gameplay Animation: The 5 Technical Decisions That Determine Your Production
- Vanessa

- Oct 14, 2025
- 8 min read
Updated: Dec 4, 2025
A few weeks ago, I was urgently contacted to help on a gameplay animation project.
The game releases in 3 months. Feedback highlighted numerous weaknesses in the animations.
Analyzing their work, I quickly understood: the problem wasn't the animators.
It was the decisions made 18 months earlier, in preproduction.
Technical decisions that seemed harmless at the time, but had locked the entire production into an unsuitable system.
Three months from release, it was too late to change anything. We could only apply band-aids.
Here are the 5 technical decisions that determine the success or struggles of a gameplay animation production.
Download the Technical Validation Checklist (Pipeline category)
1. Framerate Target: The Invisible Decision That Changes Everything
Why it's critical
30fps vs 60fps isn't just about technical performance. It's about motion perception.
At 30fps, each frame is spaced about 33 milliseconds apart. This allows time to build strong poses, readable anticipations, and well-paced timing.
At 60fps, the interval drops to 16 milliseconds: the same animations can appear too slow, too soft, or conversely jerky if transitions aren't adjusted.
The framerate changes how the brain reads the intention behind each pose.
And the reverse is just as true: an animation designed for 60fps loses its fluidity when played back at 30fps.
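To catch this kind of mismatch early, a scene-setup check is cheap insurance. Here is a minimal sketch, assuming a Maya pipeline; the 60fps target and the function name are illustrative, not a project standard:

```python
# Minimal sketch assuming a Maya pipeline; ENGINE_TARGET_FPS is an example value.
import maya.cmds as cmds

ENGINE_TARGET_FPS = 60  # hypothetical project target, to be locked in preproduction

# Maya's named time units mapped to frames per second
UNIT_TO_FPS = {"film": 24, "pal": 25, "ntsc": 30, "show": 48, "palf": 50, "ntscf": 60}

def check_scene_framerate():
    """Warn if the scene's working framerate doesn't match the engine target."""
    unit = cmds.currentUnit(query=True, time=True)
    scene_fps = UNIT_TO_FPS.get(unit)
    if scene_fps != ENGINE_TARGET_FPS:
        cmds.warning("Scene is set to '%s', but the engine targets %d fps (%.1f ms per frame)."
                     % (unit, ENGINE_TARGET_FPS, 1000.0 / ENGINE_TARGET_FPS))
    return scene_fps
```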
My experience
On a recent project, we were animating at 30fps in Maya but the engine was running at 60fps.
Interpolations between frames created artifacts impossible to predict in our animation software.
Objects disappeared for a frame, constraints desynchronized, rotations took aberrant paths.
Result: several weeks lost debugging problems invisible in Maya.
We exported, tested in-game, found the bug, returned to Maya unable to see it, attempted blind corrections.
The decision to target 60fps had been mentioned from the start... but without concrete testing or technical validation.
It was officially confirmed six months after production began, when over 200 animations were already in progress.
At that stage, we could no longer change our approach technically. Adapting timings and correcting interpolation artifacts had become very costly, especially since the artifacts were often invisible in our animation software.
We struggled until the end of production with the same issues.
Red flags
🚩 Framerate not defined at the start of preproduction
🚩 "We'll optimize the framerate later"
🚩 No direct testing pipeline from Maya/Blender/Mobu to engine
🚩 Technical team says "it should work" without concrete testing
Questions to ask
What is the definitive TARGET framerate?
Am I working at the same framerate as the engine?
What is the real-time testing workflow?
Who handles interpolation on export?
2. Mocap Choice: Saving Money in the Wrong Place
Why it's critical
There are several motion capture technologies, each suited to specific uses:
Optical: based on infrared cameras (like Vicon), offers high precision but requires dedicated space and can be sensitive to occlusions.
Inertial: relies on IMU sensors (like Xsens), more mobile and quick to set up, but sometimes less precise on complex movements.
Video: uses image analysis to extract movement, often less expensive but limited in quality and reliability.
Each system has its advantages and limitations.
The choice depends on the type of animations to capture, budget, and expected post-production quality level.
The trap? Choosing an "economical" system without measuring the real cost in post-production.
Because mocap is never "shoot and use directly."
It's always: shoot → clean the data → refine → integrate.
And depending on the system, the shooting/cleaning ratio can explode.
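To make that ratio concrete, here is a back-of-the-envelope comparison. Every number below is invented for illustration, not a figure from a real shoot:

```python
# Illustrative sketch only: all numbers are assumptions, not project figures.

def total_mocap_cost(shoot_cost, cleanup_hours_per_captured_minute,
                     minutes_captured, animator_day_rate, hours_per_day=8.0):
    """Total cost = shoot + cleanup; this is the number that should drive the choice."""
    cleanup_days = (cleanup_hours_per_captured_minute * minutes_captured) / hours_per_day
    return shoot_cost + cleanup_days * animator_day_rate

# A "cheaper" shoot that needs heavy cleanup...
economical = total_mocap_cost(10_000, cleanup_hours_per_captured_minute=4,
                              minutes_captured=60, animator_day_rate=500)
# ...versus a pricier shoot that needs light cleanup.
premium = total_mocap_cost(18_000, cleanup_hours_per_captured_minute=1,
                           minutes_captured=60, animator_day_rate=500)

print(economical, premium)  # 25000.0 vs 21750.0: the "saving" ends up costing more
```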
My experience
On a large AAA project, the choice fell on an inertial mocap system (Xsens) for locomotion on slopes and varied terrain.
The idea: save on shooting costs.
The reality: feet and pelvis jittering on every frame, and only approximate height tracking (crucial for slope locomotion).
The data was generally usable, but required enormous cleaning work.
We saved about 20% on the shoot only to lose 200% in post-production time.
A senior animator spent weeks cleaning data frame by frame without ever reaching the quality level an optical system would have given us.
The problem wasn't the system itself, Xsens works very well for certain uses.
The problem was using it for a use case where it wasn't optimal.

Red flags
🚩 Someone says "we'll save money on mocap"
🚩 System choice based solely on shooting cost
🚩 No quality testing before launching full production
🚩 Post-prod budget not clearly defined
Questions to ask
What type of animations must we capture? (fast combat, locomotion, acrobatics...)
Which system is optimal for this type?
What is the TOTAL budget (shooting + post-prod)?
Who will handle cleaning and how much time is budgeted?
3. Blending & State Machines: Once Chosen, Impossible to Go Back
Why it's critical
Your state machine system, animation graph, blend trees: this is the backbone of all your gameplay animation.
It's what manages how animations chain together, blend, and overlap.
Once you have 500 animations integrated into a system, you can't redo everything.
The system becomes structural.
If you realize mid-production that it's limiting, it's already too late.
Three opposite experiences
The flexible system (Beyond: Two Souls)
In-house system at Quantic Dream that allowed adding as many transitions as needed, at any moment in production.
Sophisticated blend trees, automatic foot management, ability to go from gameplay to cinematic seamlessly.
Result: no foot sliding, no broken transitions, ability to adjust and improve until the last day.
The catastrophic rigid system (independent project)
Choice of simple animation sequences in Unreal, triggered directly via Blueprints, without going through a State Machine.
The initial idea: "We'll keep it simple, we don't need complexity."
Concretely, each animation (idle, walk, jump...) is called manually, without transition logic or blending.
No conditions, no flow visualization, just direct calls like "Play animation" or "Set animation."
Three months from the end, impossible to add transitions without breaking everything.
Characters sliding everywhere, idle → walk transitions with poorly managed speeds, abrupt stops.
And above all: structurally impossible to correct without redoing everything, because no intermediate layer allows managing interruptions, blend times, or priorities.
This type of system can work for a prototype, but in production, it quickly becomes a trap: each animation is isolated, and the slightest adjustment requires manually rewiring each sequence.
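For readers wondering what that missing intermediate layer looks like, here is a deliberately tiny sketch. The state names, events, and blend times are invented for illustration; this is not the project's code or any specific engine's API:

```python
# Conceptual sketch: a small layer that owns transition logic, blend times,
# and priorities, instead of scattering direct "Play animation" calls everywhere.
from dataclasses import dataclass

@dataclass
class Transition:
    target: str        # destination state
    blend_time: float  # cross-fade duration in seconds
    priority: int = 0  # reserved for interruption rules, unused in this minimal sketch

class AnimStateMachine:
    def __init__(self, initial_state):
        self.current = initial_state
        self.transitions = {}  # (from_state, event) -> Transition

    def add_transition(self, from_state, event, transition):
        # Adding a transition later is a data change, not a rewire of every call site.
        self.transitions[(from_state, event)] = transition

    def handle(self, event, play):
        """`play` stands in for whatever engine call actually starts the clip."""
        t = self.transitions.get((self.current, event))
        if t is None:
            return  # event not valid from this state: nothing plays, nothing breaks
        play(t.target, blend_time=t.blend_time)
        self.current = t.target

# Usage sketch
sm = AnimStateMachine("idle")
sm.add_transition("idle", "move", Transition("walk", blend_time=0.2))
sm.add_transition("walk", "stop", Transition("idle", blend_time=0.3))
sm.handle("move", play=lambda clip, blend_time: print(clip, blend_time))
```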
The game shipped with these flaws. Reviews massacred the animations...
The limiting system (AAA project)
No automatic foot detection system for two consecutive productions.
Direct consequence: impossible to guarantee clean arrivals on the correct foot, or to precisely calibrate NPC speed so they stop exactly where desired.
Without this detection, you can blend between two animations of different speeds, but you can't dynamically adapt speed based on step rhythm.
Yet this adaptation looks far better on screen than simple linear blending.
Result:
Imprecise stops, visible sliding, clunky transitions
And to compensate, dozens of manual "oriented stop" animations, multiplying work by three
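To illustrate the speed-adaptation idea mentioned above: instead of a plain linear blend between mismatched clips, you scale the clip's play rate so the step rhythm follows the character's actual ground speed. A minimal sketch with invented numbers, not the system from that project:

```python
# Hedged sketch of speed adaptation: scale playback so foot cadence matches
# the actual movement speed instead of blending two mismatched clips linearly.

def play_rate_for_speed(actual_speed, authored_speed):
    """authored_speed = the root-motion speed the clip was animated at (m/s)."""
    if authored_speed <= 0.0:
        return 1.0
    return actual_speed / authored_speed

# A walk clip authored at 1.5 m/s, driven at 1.2 m/s, plays at 0.8x:
# foot contacts stay closer to the ground and sliding is greatly reduced.
print(play_rate_for_speed(1.2, 1.5))  # 0.8
```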

Today, Unreal Engine Blueprints are widely known and mastered across the industry.
But there's a great diversity of systems, from Unity to proprietary engines, each with its own technical specificities and integration constraints.
Over the course of my projects, I've had the chance to work with very varied technologies.
That experience now lets me anticipate the pitfalls of each pipeline, ask the right questions from preproduction onward, and adapt technical choices to real gameplay needs.

Red flags
🚩 Tech lead says "we'll see about transitions later"
🚩 No state machine, blueprint, or blending system defined in preproduction
🚩 "We'll keep it simple" on an action game
🚩 First tests already show foot sliding
Questions to ask
Does the system allow easily adding transitions during production?
Is there a foot detection system / foot IK?
How do we handle blends between very different animations?
Are there debug tools to visualize transitions?
4. Props/Weapons Management: The Poorly Anticipated Daily Hell
Why it's critical
Picking up an object. Holding it. Putting it down. Changing hands. Throwing it. Storing it.
These actions potentially represent 30-40% of your gameplay animations.
If the technical system behind it is shaky, it's a daily nightmare throughout production.
And unlike the other systems, this one only really gets tested in production, once you start having dozens of different objects, edge cases, and complex interactions.
Three implementations, three experiences
The fluid system
Ultra well-designed system. Multiple sockets on the character, stable constraint system, ability to easily switch from one object to another.
Never had a single problem picking up, putting down, manipulating objects.
Hundreds of animations with props, zero bugs. It just worked.
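For reference, the attach/detach logic behind that kind of socket-plus-constraint setup can stay very small. A hedged Maya sketch; the socket names and the char: namespace are invented, and this is not that project's tooling:

```python
# Sketch assuming a Maya rig that exposes named sockets; names are illustrative.
import maya.cmds as cmds

SOCKETS = {
    "hand_R": "char:socket_hand_R",
    "hand_L": "char:socket_hand_L",
    "holster": "char:socket_holster",
}

def attach_prop(prop, socket_name):
    """Parent-constrain the prop to a socket; switching hands is just re-attaching."""
    constraint = cmds.parentConstraint(SOCKETS[socket_name], prop, maintainOffset=True)[0]
    return constraint

def detach_prop(constraint):
    """Delete the constraint; the prop keeps the transform it currently holds."""
    if cmds.objExists(constraint):
        cmds.delete(constraint)
```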
The rigid system
Complex system in MotionBuilder supposed to simplify the process.
In reality: too rigid.
Whenever we wanted to switch the weapon to the other hand after the fact, it was a struggle.
Whenever we wanted to replace the weapon model on an existing anim, everything broke.
And interactions between characters (passing an object, etc.) were nearly impossible.
Result: we avoided certain animations that were relevant for gameplay because we knew that technically, it would be hell.
The permanent chaos
Not thought through enough in preproduction.
The entire production was spent dealing with constraints that kept breaking.
Objects teleporting from one frame to another. Disappearing. Doing 360° in one frame.
Hours lost every week debugging, finding workarounds, redoing animations that worked the day before.
It's a colossal time loss at project scale.

Red flags
🚩 First object tests show bizarre teleportations
🚩 "We'll see about that later" when you ask about props workflow
🚩 System never tested with multiple different object types
🚩 No clear pipeline for attaching/detaching props
Questions to ask
How do we technically attach/detach a prop?
Can we change objects on an existing anim?
Does the system handle interactions between characters?
Have there been tests with different object types?
5. Rig/Skeleton Choice: The Technical Foundation Not to Neglect
Why it's critical
A rig is your character's technical structure. Its bone hierarchy, controllers, deformations.
Changing a rig = redoing all the animations that use it. The further into production you are, the more catastrophic it gets.
The trap? A rig can seem "good" in basic tests, and reveal its limitations only when you start doing complex animations, extreme poses, interactions.
The number of skeletons, their definition (size, proportions), and the retargeting strategy are equally critical.
Deciding in preproduction how many distinct skeletons the game will have (one universal skeleton for all characters? one per body type? one per character?) directly impacts the animation workload.
A well-designed universal skeleton lets you share as many animations as possible between characters, drastically reducing production costs.
Conversely, multiplying skeletons without a clear retargeting strategy can explode the animation budget: each new body type requires redoing or manually adapting dozens of animations.
Not to mention that proportion differences (small vs tall character, thin vs massive) influence retarget quality: what works perfectly on one skeleton can look broken on another.
These structural decisions must be made in preproduction, because once 200+ animations are produced on incompatible skeletons, it's too late to unify.
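To make the impact of that choice concrete, here's a back-of-the-envelope estimate. Every number is an illustrative assumption, not a production figure:

```python
# Illustrative sketch: rough animation workload for different skeleton strategies.

SHARED_ANIMS = 300        # assumption: animations that could in theory be shared
ADAPTATION_COST = 0.3     # assumption: a manual adaptation ≈ 30% of an original anim

def workload(num_skeletons, clean_retarget=True):
    """Animation-count equivalent of supporting `num_skeletons` distinct skeletons."""
    if num_skeletons == 1:
        return SHARED_ANIMS
    extra = SHARED_ANIMS * (num_skeletons - 1)
    # With a solid retarget pipeline you pay an adaptation cost per extra skeleton;
    # without one, each extra body type is closer to a full redo.
    return SHARED_ANIMS + extra * (ADAPTATION_COST if clean_retarget else 1.0)

print(workload(1))                        # 300
print(workload(4))                        # 570 with a working retarget strategy
print(workload(4, clean_retarget=False))  # 1200 when every body type is a redo
```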
My experience
On a recent project, the main character's rig was redone 3 times.
First time at 6 months: we discovered certain deformations didn't work well.
Second time at 12 months: certain pushed poses still broke the rig.
Third time at 18 months: complete overhaul for skeleton optimization reasons.
Each time: redoing dozens of animations already validated, already integrated, already tested.
The worst? These rig problems were predictable. But no one had taken the time to test them thoroughly in preproduction.


Red flags
🚩 Rigger says "I'll improve the rig during production"
🚩 Tests limited to static poses or simple cycles
🚩 No functional validation before starting animations
🚩 No verification of extremes (stretch, twist, contact)
Questions to ask
Is the rig final or will it evolve?
Has it been tested on all game action types?
Who validates that the rig no longer changes?
What is the absolute deadline for rig changes?
Conclusion: 20% of Time, 80% of Impact
These 5 technical decisions are made in a few weeks of preproduction.
They determine 2 years of production.
The difference between smooth production and nightmarish production doesn't depend on animator talent.
It depends on the quality of technical decisions made before even starting to animate.
Preproduction isn't "the moment when we do tests."
It's the moment when we lock foundations to never return to them.
Because in gameplay animation, you can't "patch" a bad technical system mid-course.
You can only live with it throughout production.
And sometimes, as with my client three months from release, you realize too late that the foundations were shaky.
At that stage, all you can do is apply band-aids.
Download the Technical Validation Checklist (Pipeline category)
In an upcoming article, I'll address key strategic preproduction decisions (scope, team...), those that profoundly influence a project's stability and coherence.
-----------------------------------------------------------------------------------------------------------

AniMatch: The beta version is now open to testers: AniMatch
-----------------------------------------------------------------------------------------------------------


