D365 DEEP DIVE

Your D365 Database Is Probably Bigger Than You Think

Every D365 F&O tenant gets a storage allotment included with licensing — typically 10GB base plus incremental capacity per license. For a mid-market company, that might land around 40GB.

What most people don’t realize: everything over that allotment costs $40/GB/month. And a lot of what’s pushing you over isn’t your business data.

It’s auto-generated indexes you didn’t ask for. Staging tables from integrations that ran years ago. Database logging that someone configured too broadly at go-live and never revisited.

Three Things Silently Growing Your Database

1. Microsoft is adding indexes — without telling you.

There’s a “watchdog” service running on production that auto-creates indexes when it detects slow queries. These indexes are prefixed with WDMI, and every single one consumes storage that counts against your allotment. No notification. No approval. No email saying “hey, we just added 3GB of indexes to your TaxTrans table.”

In many environments, the index size on tables like TaxTrans is larger than the data itself. WDMI indexes stack on top of your standard indexes and any ISV indexes. The cycle looks like this:

Slow queries → auto-indexes → bigger database → higher bill → nobody knows it happened.

Here’s what makes this especially frustrating: each WDMI index is a signal that something should be fixed in X++ or at the AOT level. A missing index definition. A poorly constructed query. A report that’s doing a full table scan. Instead of fixing the root cause, you’re paying $40/GB/month for the workaround.

2. Staging tables that served their purpose — years ago.

Custom integrations write to staging tables. Data gets consumed in seconds. The records stay forever.

I worked with a manufacturer that had three integrations — MES production actuals, a 3PL warehouse sync, and an AP invoice import. Four years of operation. Zero cleanup. 60GB of processed staging data still sitting in the database, plus another 15-20GB of WDMI indexes the watchdog had added on top of those bloated tables.

DMF staging tables do the same thing. Every data management import and export leaves records in the staging tables. They compound silently. Nobody thinks about them because they’re not “business data” — they’re infrastructure. But they count against your storage just the same.

The pattern is always the same: the integration developer built the inbound flow and the processing logic, but nobody built the cleanup job. It wasn’t in the SOW. It wasn’t in the user stories. So the data just sits there.
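The missing cleanup job is usually a small piece of logic: pick a retention window, find fully processed staging records older than the cutoff, and purge them. Here's a minimal sketch of that retention rule in Python — the record shape and 30-day policy are hypothetical; in a real D365 environment this logic would live in an X++ batch job or a Data management cleanup routine:

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # hypothetical policy: keep 30 days for troubleshooting


def purge_processed_staging(records, now=None):
    """Split staging records into (kept, purged).

    Each record is a dict with 'processed' (bool) and 'created' (datetime).
    Only records that are fully processed AND older than the retention
    window get purged; unprocessed rows are always kept so the integration
    can still pick them up.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    kept, purged = [], []
    for rec in records:
        if rec["processed"] and rec["created"] < cutoff:
            purged.append(rec)
        else:
            kept.append(rec)
    return kept, purged
```

That's the whole job. The point isn't that it's hard — it's that nobody writes it unless it's a named deliverable.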

3. Database logging configured too broadly and never revisited.

Someone enables logging on 40+ tables during implementation. Maybe it was a compliance requirement. Maybe it was an auditor’s request. Maybe someone just checked a lot of boxes during a go-live prep meeting and nobody pushed back.

Three years later: 800 million rows in the DatabaseLog table. 150+ GB. For records nobody has ever queried.

D365 already maintains history through its posting framework. Subledger journals, inventory transactions, ledger entries — these all have their own audit trails built into the application. Database logging should supplement that, not duplicate it. If you’re logging every update to InventTrans and also relying on the inventory transaction history that D365 maintains natively, you’re paying for the same information twice.

Why This Matters Beyond the Bill

The storage cost is the obvious problem. But there’s a bigger operational issue that most teams don’t see coming until it hits them.

When your production database is over 50GB, you can’t export a bacpac. That means no production data in dev. You can’t reproduce bugs against real data. You can’t validate upgrades with real transaction volumes. You can’t performance test against anything meaningful.

That’s not a cost problem — that’s a delivery problem. Your development team is flying blind, testing against sanitized subsets that don’t reflect the real system. And when something breaks in production that they can’t reproduce in sandbox, the troubleshooting time multiplies.

I’ve seen teams spend weeks chasing intermittent batch failures that only happened with production-scale data volumes. If they’d been able to work with a real copy of production, they’d have found it in hours.

What to Do About It

Know where you stand. Go to Power Platform Admin Center → Licensing → Capacity add-ons. If you’re under your allotment today, manage growth before you’re not. If you’re already over, you’re bleeding money every month on data that probably shouldn’t be there.

Audit your WDMI indexes. Refresh production to a sandbox environment, get JIT access, and query for non-application indexes. Each one is a signal. Fix them in code — add proper indexes at the AOT level so the watchdog doesn’t need to compensate.
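Once you've exported the index metadata from the sandbox copy (name, table, size — e.g. from SQL Server's system views), summarizing the watchdog's footprint is straightforward. A sketch of that summarization step, assuming the rows have already been pulled into Python — the WDMI prefix is real, but the input format here is my own:

```python
from collections import defaultdict


def summarize_wdmi(indexes):
    """Total WDMI index size per table, largest offenders first.

    'indexes' is a list of (table_name, index_name, size_mb) tuples
    exported from the sandbox database. Watchdog-created indexes
    carry the WDMI prefix; everything else is ignored.
    """
    by_table = defaultdict(float)
    for table, index_name, size_mb in indexes:
        if index_name.startswith("WDMI"):
            by_table[table] += size_mb
    # Sort descending by size: the top tables point at the queries to fix.
    return sorted(by_table.items(), key=lambda kv: kv[1], reverse=True)
```

The tables at the top of that list are your prioritized worklist: each one maps to a query or report that needs a proper index in code.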

Schedule recurring cleanups. Batch history, DMF staging, database logging — these need monthly cleanup jobs at minimum. This isn’t optional maintenance. It’s the cost of running the system.

Build cleanup into every custom table. If you built the integration, the cleanup job ships with it. Period. This should be a standard deliverable in every SOW that includes custom data entities or staging tables. If your implementation partner delivered an integration without a cleanup job, they delivered an incomplete solution.

Narrow your database logging. Pull the list of tables with logging enabled. Cross-reference against what’s actually been queried in the last 12 months. High-volume table that nobody’s looked at? Turn it off. You can always re-enable it if a real need surfaces.
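The cross-reference itself is just a set difference: tables with logging enabled, minus tables anyone has actually queried. A trivial sketch (table names are illustrative, not from any real configuration):

```python
def logging_candidates(logged_tables, queried_tables):
    """Tables with database logging enabled that nobody has queried.

    Both inputs are collections of table names; the output is a sorted
    list of logging configurations that are candidates to switch off.
    """
    return sorted(set(logged_tables) - set(queried_tables))
```

The hard part isn't the computation — it's getting an honest answer to "who has queried this in the last 12 months," which usually means asking the audit and finance teams directly.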

The System Creates Data. Nobody Manages It.

D365 was designed to create data. Transaction posting, integration processing, batch execution — the system is optimized for throughput. What it doesn’t have is a built-in philosophy around data lifecycle. There’s no dashboard that says “your staging tables are 60GB and growing.” There’s no alert that says “the watchdog added 4 indexes to TaxTrans last month.”

And Microsoft’s own automation is quietly adding to the pile.

At $40/GB/month for overage, it adds up fast. A 100GB overage — which isn’t unusual for a company that’s been live for 3-4 years without active management — is $48,000 a year. For data nobody needs.
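The arithmetic behind that figure is worth having at hand when you build the business case:

```python
RATE_PER_GB_MONTH = 40  # overage rate cited above, in USD


def annual_overage_cost(overage_gb, rate=RATE_PER_GB_MONTH):
    """Annual cost of storage overage at a flat $/GB/month rate."""
    return overage_gb * rate * 12


# 100GB of overage at $40/GB/month
print(annual_overage_cost(100))  # → 48000
```

Run the same calculation against your own overage number before your next budget conversation.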

Know where your storage stands. Audit what’s actually in there. And build the cleanup processes that should have been there from day one.


This is the kind of hidden cost I help clients uncover at 3J Advisory. If your D365 storage bill doesn’t look right, let’s dig in.