Why Your Post-Go-Live Is Harder Than Your Go-Live
Most D365 implementations don’t fail at go-live.
They fail six months later.
That’s when the implementation partner’s team rolls off. The project manager moves on. The architect who made all the key decisions is three projects away and doesn’t remember your number sequence configuration. And your internal team — the people who have to live with this system every day — inherits something they didn’t design, full of decisions they don’t fully understand.
I’ve seen this pattern at every scale. Small manufacturers. Fortune 50 retailers. The timeline varies, but the story is always the same.
The Honeymoon Ends Fast
Go-live gets all the attention. There’s a war room. Everyone’s watching. The cutover checklist gets executed with military precision. Data migrates. Integrations fire. Users log in. Someone sends a congratulatory email.
And then everyone relaxes.
But here’s what’s actually happening: you tested with a fraction of your real transaction volume. Your batch jobs ran against a dataset that was a clean subset of production. Your integrations handled the happy path because that’s what UAT covered. The edge cases — the ones that matter — haven’t hit yet.
They will. Usually around week three.
What Actually Breaks
After 13 years in D365 F&O, I can tell you the post-go-live issues fall into predictable categories:
Batch processing under real load. That inventory closing job that ran in 20 minutes during testing? It’s going to take 4 hours against your full production dataset. And it’s going to lock tables that your warehouse team needs. Nobody tested this because nobody had a full year of production data in the test environment.
Integration timeouts. Your dual-write sync worked perfectly with 500 records. Now it’s processing 50,000 records during peak hours and your middleware is throttling. The retry logic that “worked in test” is creating duplicate records because nobody handled idempotency properly.
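Proper idempotency means a retried message is a no-op rather than a second insert. Here is a minimal sketch of deduplicating on a message key (Python, purely illustrative — the function names and the SQLite-backed store are stand-ins for whatever durable storage your middleware actually uses, not any D365 API):

```python
import sqlite3

def make_store(path=":memory:"):
    """Tiny processed-message store; real middleware would use durable storage."""
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS processed (msg_id TEXT PRIMARY KEY)")
    return conn

def process_once(conn, msg_id, payload, handler):
    """Run handler(payload) at most once per msg_id, even across retries."""
    try:
        # The INSERT fails on a duplicate key, so a retried message is skipped
        conn.execute("INSERT INTO processed (msg_id) VALUES (?)", (msg_id,))
    except sqlite3.IntegrityError:
        return "skipped"      # already processed: the retry becomes a no-op
    handler(payload)
    conn.commit()             # commit the dedup record together with the work
    return "processed"
```

The key design point: the dedup record and the business operation commit together, so a crash between them leaves the message retryable rather than half-done.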
Data entity surprises. The recurring data jobs that import your daily feeds start throwing cryptic staging table errors. The composite entity that worked in UAT has a field mapping that breaks on a specific combination of dimension values that only exists in production data.
Number sequence gaps and locks. Under concurrent load, your number sequences start gapping or — worse — deadlocking. Finance can’t post journals during peak processing windows. This one is almost always a configuration issue that was invisible at test-level concurrency.
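The usual fix is the preallocation setting: each process reserves a block of numbers up front instead of taking a lock on every draw. A toy sketch of the tradeoff (Python, illustrative only — F&O configures this declaratively on the number sequence, not in code):

```python
import itertools
import threading

class PreallocatedSequence:
    """Sketch of number-sequence preallocation.

    Each consumer grabs a whole block under the lock, then draws from it
    lock-free. This is the tradeoff a 'continuous' sequence refuses:
    continuous avoids gaps but serializes every single draw; preallocation
    scales under concurrency but leaves gaps if a reserved block goes unused.
    """
    def __init__(self, block_size=10):
        self._next = itertools.count(1)
        self._lock = threading.Lock()
        self._block_size = block_size

    def allocate_block(self):
        with self._lock:  # brief contention per block, not per number
            return [next(self._next) for _ in range(self._block_size)]
```

If finance genuinely needs gap-free journal numbers, keep the sequence continuous and accept the serialization; otherwise, preallocation is almost always the right call under load.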
Security and workflow bottlenecks. The approval workflows that seemed fine with 10 test users collapse when 200 real users hit them simultaneously. Someone configured a workflow with a hard-coded user assignment instead of a role, and that person is now the bottleneck for every purchase order.
Why the Implementation Partner Can’t Fix This
Here’s the uncomfortable truth: the team that built your system is often not the right team to stabilize it.
Implementation projects are scoped, budgeted, and staffed for delivery. The partner’s incentive is to hit go-live and move to the next engagement. The consultants who know your system best are already being pulled toward their next project. The ones who stay for “hypercare” are often the junior resources who did the configuration work, not the architects who made the design decisions.
Post-go-live requires a different skill set. You need someone who can:
- Read the batch job execution logs and understand why `InventCostClosing` is deadlocking against `InventTransPosting`
- Trace a data entity failure through the staging tables, the target mapping, and the entity's `postLoad` method to find the actual root cause
- Optimize a SQL execution plan without breaking the X++ query that generates it
- Refactor a chain-of-command extension that the original developer wrote without understanding the base method's transaction scope
This is debugging and optimization work. It’s not glamorous. It doesn’t fit neatly into a SOW. But it’s the work that determines whether your D365 investment actually pays off.
What Good Post-Go-Live Support Looks Like
If I could design the ideal post-go-live engagement, here’s what it would include:
A technical assessment in the first two weeks. Not a slide deck review — an actual review of the batch job configurations, the integration patterns, the data entity mappings, and the customization layer. Identify the time bombs before they go off.
Batch processing optimization. Profile every batch job that runs during business hours. Identify the ones that are going to break under load and fix them proactively. This usually means refactoring the job’s query patterns, adjusting the task bundling, or restructuring the recurrence schedule.
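Task bundling is the simplest of those levers: split one monolithic job into fixed-size bundles that commit independently, so locks are held briefly and a failure takes down one bundle instead of the whole run. A sketch of the chunking idea (Python; in F&O you would add one batch task per bundle via the batch framework, but the split logic is the same):

```python
def bundle_tasks(record_ids, bundle_size):
    """Split a large workload into fixed-size bundles.

    Each bundle becomes one independently-committing unit of work,
    so the last short bundle picks up the remainder.
    """
    return [record_ids[i:i + bundle_size]
            for i in range(0, len(record_ids), bundle_size)]
```

Picking `bundle_size` is the real tuning knob: too small and per-task overhead dominates; too large and you are back to long-held locks.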
Integration hardening. Add proper retry logic with idempotency checks. Implement dead-letter handling so failed messages don’t disappear. Set up monitoring that tells you when an integration is degrading before it fails completely.
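The retry-plus-dead-letter pattern can be sketched in a few lines (Python; the `send` callable, attempt counts, and delay values are placeholders for whatever your middleware actually exposes):

```python
import time

def deliver_with_retry(message, send, dead_letters, max_attempts=3, base_delay=0.01):
    """Retry a flaky send with exponential backoff; park permanent failures
    in a dead-letter list for later replay instead of dropping them."""
    for attempt in range(max_attempts):
        try:
            send(message)
            return True
        except Exception:
            if attempt + 1 == max_attempts:
                dead_letters.append(message)     # keep it for later replay
                return False
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying
    return False
```

Note this only makes sense on top of idempotent processing on the receiving side; retrying a non-idempotent endpoint is exactly how the duplicate-record problem above gets created.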
A knowledge transfer that’s actually useful. Not a 200-page document that nobody reads. Targeted sessions with your internal team on the specific areas they’ll need to maintain. Focused on “here’s what will break and here’s how to fix it” rather than “here’s how the system works in theory.”
The Real Cost of Waiting
Every week you delay addressing post-go-live technical debt, it compounds. The batch job that’s slow today will be slower next month with more data. The integration that’s mostly working will create a data quality issue that takes weeks to clean up. The security configuration that’s “good enough” will fail an audit.
I’ve worked with companies that waited 12 months to address post-go-live issues. By that point, the technical debt was so deeply embedded that fixing it required a mini-reimplementation. The cost was 3-4x what it would have been if they’d addressed it in the first 90 days.
If your go-live was “successful” but your team is drowning — that’s normal. And it’s fixable. But the window for fixing it efficiently is shorter than you think.
This is the work I do at 3J Advisory. If your D365 post-go-live needs stabilization, I’d be happy to talk through what you’re seeing.