Summary
A Power BI slow refresh that crawls past 30 minutes – paired with a PBIX file ballooning past 2 GB and three reports producing three different numbers for the same metric – is almost never a DAX performance problem. It’s business logic living in the wrong layer. This case study walks through how we refactored a retail analytics client’s Power BI model by pushing grain changes, joins, and canonical metric definitions upstream into a dbt model: 2.1 GB dropped to 180 MB, a 38-minute refresh fell to roughly 3 minutes, and three Net Revenue definitions collapsed into one.
If your Power BI slow refresh keeps climbing past 30 minutes, three reports are producing three different numbers for “Net Revenue,” and your PBIX file has crossed 2 GB – the problem is almost certainly not your DAX. It’s where your business logic lives.
This pattern doesn’t just make Power BI slow. It quietly destroys trust in every number the business looks at. When the CFO, the sales lead, and the ops team each pull a different Net Revenue figure from the same semantic model, the problem stops being technical – it becomes political.
These were the exact symptoms a retail analytics team was dealing with when they brought in our Power BI consulting team. A flagship Sales Performance report had accumulated 47 measures, 12 calculated columns, 9 merged Power Query queries, and a 38-minute refresh – when it didn’t time out entirely.
The root cause wasn’t bad DAX. It was a cultural default: every new requirement became another measure, another merge, or another calculated column inside the PBIX. Power BI made it frictionless to say “yes” – and impossible to ask “where does this belong?”
This case study walks through how we diagnosed the problem, the layered framework we applied, and the refactor that cut the model from 2.1 GB to 180 MB while dropping refresh time by roughly 92%. The numbers are specific; the pattern is one almost every growing BI team hits.
Why Your Power BI Refresh Is Slow (and Getting Slower)
Most of these warning signs look like normal Power BI life at first – which is exactly why they’re easy to miss. Here’s how we reframed each one during the audit.
- Refresh time creeping past 30 minutes (or timing out entirely) – not a capacity problem, but a sign Power BI is doing warehouse work.
- A PBIX file ballooning past 2 GB – calculated columns materializing joins inside the model.
- The same metric returning different numbers across reports – business definitions duplicated instead of certified once.
- Measure count climbing into the dozens – every new requirement defaulting to “add another measure.”
Every one of these is a signal that logic has drifted two or three layers to the right of where it should live. Scroll through any Power BI Community troubleshooting thread on slow refresh and you’ll see the same four symptoms show up over and over.
Where Should Business Logic Live in a Power BI Architecture?
Business logic in a modern BI stack can live in any of four places – and the placement decision dictates performance, governance, and trust. Microsoft’s own star-schema guidance for Power BI makes the same point about grain and relationships, but teams consistently underestimate how quickly that discipline erodes in practice.
Warehouse / dbt → Semantic model → DAX measures → Visual filters
The farther left, the more canonical the definition. The farther right, the faster it ships – and the faster it drifts.
Each layer has a purpose:
- Warehouse / dbt – grain changes, cross-system joins, canonical business entities, FX/tax/unit conversions.
- Semantic model – relationships, role-playing dimensions, certified metrics reused across reports.
- DAX measures – report-specific slicing, time intelligence, presentation math.
- Visual filters – interactive, user-driven, ephemeral.
The rule we adopted – originally formalized in Roche’s Maxim of Data Transformation – is this: push logic as far left as it can reasonably live without blocking analyst self-service. This is the same principle we apply across all our cloud analytics engagements, whether the target warehouse is Snowflake, BigQuery, Databricks, or Fabric.
That one sentence resolves 90% of the “where does this belong?” arguments in under 30 seconds.
Fixing the Power BI Slow Refresh: From DAX to a dbt Model
Net Revenue was defined as gross sales minus returns, promotional discounts, and loyalty redemptions, with specific handling for cross-border orders and inter-company transfers. Before the refactor, all of that logic lived inside a single DAX measure.
What follows is the actual measure we inherited. Read past the syntax and look at how many layers of logic are stacked inside one expression – a join, a filter, a grain change, and two derived measures, all evaluated inside Power BI on every visual render. This is exactly the pattern Microsoft’s own DAX performance guidance warns against.
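The exact expression isn’t reproduced in this excerpt, but the shape it described looked roughly like the sketch below – table, column, and measure names are illustrative, not the client’s real ones:

```dax
// Representative sketch only – illustrative names.
// Note the stack: a join (RELATED), a filter (FILTER), a grain change
// (SUMX iterating the order-line fact), and two derived measures,
// all evaluated inside Power BI on every visual render.
Net Revenue :=
SUMX (
    FILTER (
        Sales,
        Sales[OrderStatus] <> "Cancelled"
            && Sales[IsIntercompany] = FALSE ()
    ),
    Sales[Qty] * RELATED ( Products[UnitPrice] )
)
    - [Approved Returns]
    - [Promo & Loyalty Deductions]
```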

Four things were wrong with this measure, in order of severity:
The critical problem: RELATED(Products[UnitPrice]) was re-evaluating across 180M rows on every visual render. This is what drove refresh time from acceptable to timeout-prone. SQLBI has an excellent deep-dive on joining tables in DAX that explains exactly why this pattern is so destructive at scale. (We wrote about a similar pattern we’ve seen in Spark data skew on Azure Synapse – the failure surface is different, but the root cause is the same: joins happening at query time instead of load time.)
The quiet problem: Qty × UnitPrice was duplicated in two other reports – with a rounding bug in one. Three Net Revenue numbers in circulation, all technically “correct.”
The governance problem: Cross-border FX conversion was happening in Power Query via a merged currency table, with no audit trail and no way to reproduce historical rates.
The trust problem: The business definition was invisible to anyone without PBIX edit access. Finance couldn’t validate it. Data engineering couldn’t test it.
The fix was to push the calculation all the way to the warehouse as a dbt fact model, built and maintained by our data engineering team:
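The production model isn’t shown here; the following is a minimal sketch of what such a dbt fact model might look like, assuming staging models named `stg_sales`, `stg_products`, `stg_fx_rates`, and `stg_returns`:

```sql
-- models/marts/fct_net_revenue.sql (illustrative names)
-- Materializes the join, FX conversion, and deduction logic once per run,
-- at the order-line grain – instead of per visual render in DAX.
{{ config(materialized='table') }}

select
    s.order_line_id,
    s.order_date,
    s.product_id,
    -- Qty x unit price resolved at load time
    s.qty * p.unit_price * fx.rate                    as gross_revenue_usd,
    coalesce(r.return_amount_usd, 0)                  as returns_usd,
    s.promo_discount_usd,
    s.loyalty_redemption_usd,
    s.qty * p.unit_price * fx.rate
      - coalesce(r.return_amount_usd, 0)
      - s.promo_discount_usd
      - s.loyalty_redemption_usd                      as net_revenue_usd
from {{ ref('stg_sales') }} s
join {{ ref('stg_products') }} p  on s.product_id = p.product_id
join {{ ref('stg_fx_rates') }} fx on s.currency = fx.currency
                                 and s.order_date = fx.rate_date
left join {{ ref('stg_returns') }} r
       on s.order_line_id = r.order_line_id
      and r.is_approved
where s.order_status <> 'cancelled'
```

Because the model is plain SQL under version control, the FX rates and returns-approval rules are now reviewable and reproducible – the audit-trail gap disappears as a side effect.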

The DAX measure now collapses to a single line:
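The measure itself isn’t reproduced in this excerpt; assuming the warehouse model lands as a fact table with a pre-computed column (illustrative names `FctNetRevenue` and `net_revenue_usd`), it reads something like:

```dax
// All business logic lives upstream; DAX just aggregates.
Net Revenue := SUM ( FctNetRevenue[net_revenue_usd] )
```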

Why this works
Instead of being re-evaluated on every visual render, the join is now pre-materialized once per refresh at the order-line grain. FX conversion, returns-approval logic, and cancellation filters live in a single SQL file that is code-reviewed, version-controlled, and fully queryable outside Power BI. And every report now delivers the same number – because they all read the same column.
Feeling stuck with a bloated Power BI model? Start by booking a free 30-minute Power BI architecture review with our team at ScriptsHub Technologies. During this session, we will thoroughly audit your Power BI model using a proven four-layer framework. Then, we’ll clearly identify which measures, calculated columns, and Power Query merges should be moved upstream to improve performance, scalability, and data model efficiency.
Book a Power BI architecture review →
Before and After: What Pushing Logic Upstream Delivered
Three months after the refactor went live, the measured impact:
- PBIX size: 2.1 GB → 180 MB
- Refresh time: 38 minutes → roughly 3 minutes (≈92% reduction)
- Net Revenue definitions in circulation: 3 → 1
The most surprising outcome wasn’t the performance win – it was cultural. Once there was a visible framework for where things belong, the team stopped defaulting to “add another measure.” Requests now arrive pre-classified: “this is a grain change, it needs dbt,” or “this is just YoY, I’ll add the measure myself.”
When to Push Power BI Logic Upstream: 7 Red Flags
Use this as a review checklist. Any one of these signals that logic has drifted into the wrong layer:
- Calculated columns doing joins (RELATED, LOOKUPVALUE across large tables)
What it signals: The join belongs in a dbt model, not in the semantic layer.
- Power Query merges on tables over 1M rows
What it signals: The merge belongs in a warehouse view or dbt model.
- The same business definition appearing in 3+ reports
What it signals: Promote it to a certified semantic-model metric.
- SUMX(FILTER(…)) patterns over entire fact tables
What it signals: The fact grain is wrong – fix it upstream.
- FX, tax, or unit conversions computed in DAX
What it signals: Belongs in the ELT layer, not the report layer.
- Hardcoded lists of SKUs, regions, or account IDs inside measures
What it signals: A dimension table is missing – add one in the warehouse.
- Power BI slow refresh over 10 minutes with under 50M rows
What it signals: Power BI is computing things it shouldn’t be.
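To make the SUMX(FILTER(…)) flag concrete, here’s a hedged before/after sketch with hypothetical table, column, and flag names:

```dax
// Before – row-by-row scan of the entire fact table on every visual render
High Value Sales :=
SUMX ( FILTER ( Sales, Sales[Amount] > 1000 ), Sales[Amount] )

// After – the warehouse computes an is_high_value flag at load time,
// and the measure collapses to a simple filtered aggregation
High Value Sales :=
CALCULATE ( SUM ( Sales[Amount] ), Sales[IsHighValue] = TRUE () )
```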
Most of these trace back to a single root cause: fact-table grain chosen for convenience rather than correctness. Ralph Kimball’s dimensional-modeling techniques have been teaching teams the fix for 30 years – it just happens to look different when the failure surfaces in Power BI instead of a data warehouse.
Recognize three or more? That’s a 30-minute Power BI architecture review conversation with our team.
How to Push Power BI Logic Upstream Without Breaking Self-Service
The refactor playbook our team applies on every engagement:
- Inventory every calculated column, Power Query merge, and measure that contains a business definition (not just presentation math).
- Classify each item into one of the four layers using the rule above. Write the classification in a spreadsheet – make the decision visible.
- Migrate grain-level facts first. Build the dbt model, certify it, and point the semantic model at it. Don’t delete the DAX measure yet.
- Shadow-test the new number against the old for one full refresh cycle. Retire the DAX version only when they match.
- Set a review gate. No new measure ships without answering: “Could this live upstream?”
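The shadow-test step can be as simple as a grouped comparison query. A minimal sketch, assuming the new model is `fct_net_revenue` and the legacy numbers were exported to a table named `legacy_net_revenue`:

```sql
-- Flag any month where the new and legacy Net Revenue disagree beyond
-- rounding tolerance; an empty result means it's safe to retire the DAX.
select
    date_trunc('month', f.order_date)   as month,
    sum(f.net_revenue_usd)              as dbt_net_revenue,
    sum(l.net_revenue_usd)              as legacy_net_revenue
from fct_net_revenue f
join legacy_net_revenue l using (order_line_id)
group by 1
having abs(sum(f.net_revenue_usd) - sum(l.net_revenue_usd)) > 0.01
```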
The 30-second rule: if the logic changes a fact’s grain, crosses systems, or will be reused in more than one report – it belongs in dbt, not DAX.
The Honest Tradeoffs of Pushing Power BI Logic Upstream
Pushing logic upstream is not free. A realistic view of the tradeoffs:

The last row is the one we’re most honest about with clients: this architecture assumes a data engineering function with capacity. If your warehouse team is already overloaded, pushing more logic upstream will make things worse before it makes them better. For teams already dealing with slow Power BI refreshes beyond the 500K-row mark, these trade-offs typically pay off within one quarter. For teams with under 1M rows and a single-report footprint, the DAX-first approach is genuinely fine – until it isn’t.
Power BI Business Logic: Knowing When to Push It Upstream
Every layer of the modern BI stack has a purpose. Gravity pulls logic toward the report because Power BI makes “add another measure” frictionless. Governance has to pull it back.
The three rules we now apply on every Power BI engagement:
- Grain changes belong upstream. Always. If you’re reshaping fact tables in Power Query, stop.
- Certified metrics belong in the semantic model. One definition, reused everywhere.
- DAX is for presentation math, not business logic. Time intelligence and ratios – yes. Revenue definitions – no.
One honest test. If you can’t point to the one place your Net Revenue definition lives, you don’t have one – you have three, and the business is paying for all of them.
Power BI slow refresh isn’t a DAX problem – it’s a modeling problem caused by business logic living in the wrong layer, which produces slow performance, inconsistent metrics, and bloated PBIX files. The fix is to push joins, grain changes, and reusable definitions upstream, keeping DAX focused on presentation – one source of truth, a smaller PBIX, and faster refreshes follow. If your reports are slowing down or your numbers don’t match, ScriptsHub Technologies will show you exactly why: send your PBIX file today and receive a written audit within 48 hours detailing what to fix, what to move upstream, and how much refresh time you can recover. No calls, no back-and-forth.
Frequently Asked Questions (FAQs)
Q1. Where should business logic live – Power BI or the data warehouse?
Anything the CFO could dispute belongs in the data warehouse, while anything that only affects how metrics are sliced within a single report belongs in DAX; therefore, push logic as far upstream as reasonably possible.
Q2. Why is my Power BI refresh so slow?
A slow Power BI refresh is rarely a DAX issue; more often, Power BI is handling work that belongs in the data warehouse – merges, joins, or improperly grained fact tables. For teams not ready to adopt dbt, phData’s Power BI Dataflows best-practices guide covers an intermediate approach.
Q3. When should I use dbt instead of DAX for business metrics?
Use dbt when you want the same number to show up identically in Power BI, Tableau, and the CFO’s Excel export. Use DAX when the calculation only matters inside one report.
Q4. How much can I realistically reduce PBIX file size by pushing logic upstream?
It depends on the cause. If calculated columns are materializing joins, expect 70–90% reductions. If you genuinely have hundreds of millions of raw rows, you need aggregations or incremental refresh – not upstream logic.
Q5. Does the dbt Semantic Layer replace the need for a Power BI semantic model?
No – they’re complementary. The dbt Semantic Layer defines metrics once in SQL, making them auditable and consistent, while Power BI delivers those metrics through reports using DirectQuery, row-level security, and relationships. In practice, you need both.