Guide · 3 min read

Context Engineering: The New Frontier of Variance Analysis

Every quarter, finance teams in shared service centres spend one to two days writing variance commentary for board reports. The output is often generic: 'Revenue was below budget due to lower volumes.' The analyst knows this is weak. The CFO knows this is weak. But cross-referencing budget assumptions, prior period explanations, and regional patterns across a dozen documents takes time nobody has during close.

The setup: building a context library

Load these documents into an AI research tool such as NotebookLM:

  • The budget memo and its underlying assumptions
  • Prior period variance explanations accepted by the CFO
  • Regional and seasonal pattern documentation
  • Business unit cost driver commentary
  • Two or three examples of CFO-approved commentary - what 'good' looks like

This takes about thirty minutes the first time. After that, you update incrementally each quarter.
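The library above is really just a small, documented inventory. As a minimal sketch of that idea (all file names and descriptions here are hypothetical, and in practice the documents simply live in the tool's workspace):

```python
# A context library as a simple manifest (file names are hypothetical).
# Each entry records what the document contributes, so the quarterly
# refresh is an incremental check rather than a rebuild.

CONTEXT_LIBRARY = {
    "budget_memo_fy25.pdf": "Budget assumptions underlying the forecast",
    "variance_q3_accepted.docx": "Prior period explanations accepted by the CFO",
    "regional_seasonality.md": "Regional and seasonal pattern documentation",
    "cost_drivers_bu_north.md": "Business unit cost driver commentary",
    "gold_standard_commentary.docx": "Examples of CFO-approved commentary",
}

def library_checklist(library: dict[str, str]) -> str:
    """Render the manifest as a review checklist for the quarterly update."""
    lines = [f"- {name}: {purpose}" for name, purpose in sorted(library.items())]
    return "\n".join(lines)
```

Running `library_checklist(CONTEXT_LIBRARY)` each quarter makes the curation question explicit: is every entry still current, and is anything missing?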

The process: feeding actuals

Drop in the quarterly actuals. Instead of 'DACH region underperformed,' you get 'DACH region volume shortfall of -12%, consistent with Q4 assumptions memo - new pricing structure was not reflected in the forecast model.' The AI classifies each variance by root cause: forecast methodology problem, execution problem, or timing difference.
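The structure of that output can be sketched in a few lines of code. A minimal illustration, assuming hypothetical figures and using the three root-cause labels named above:

```python
# Sketch: compute a variance against budget and attach one of the three
# root-cause labels from the text. Region names and figures are hypothetical.

ROOT_CAUSES = {"forecast_methodology", "execution", "timing"}

def variance_pct(actual: float, budget: float) -> float:
    """Variance of actuals against budget, as a signed percentage."""
    return round((actual - budget) / budget * 100, 1)

def commentary_line(region: str, actual: float, budget: float, root_cause: str) -> str:
    """Format one classified variance line for the commentary draft."""
    if root_cause not in ROOT_CAUSES:
        raise ValueError(f"unknown root cause: {root_cause}")
    pct = variance_pct(actual, budget)
    label = root_cause.replace("_", " ")
    return f"{region}: volume variance {pct:+.1f}% ({label})"

# e.g. a 12% shortfall traced back to the forecast model:
line = commentary_line("DACH", actual=88.0, budget=100.0,
                       root_cause="forecast_methodology")
# → "DACH: volume variance -12.0% (forecast methodology)"
```

The AI supplies the classification and the narrative; the point of the sketch is that each line carries a quantified variance plus exactly one root-cause label, which is what makes the commentary auditable.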

The insight that changes the job

Same numbers, same AI model, same prompt. Give it thin context and you get generic commentary. Give it rich context and you get commentary that reads like it was written by someone who has been in the organisation for years. The differentiator is not the AI. It is the context you feed it.

Context engineering as a discipline

The controller's job shifts from writing commentary to curating the context library. Which documents go in? Which examples best represent the standard? Which assumptions are still current? The quality of AI output is a direct function of these curation decisions. A well-curated context library produces consistently strong output. A neglected one drifts toward the generic commentary you were trying to escape.

Practical considerations

NotebookLM is free. No infrastructure, no IT project, no vendor evaluation. A finance team can start tomorrow with documents they already have. Documents stay within the tool's workspace - nothing shared externally.

Each quarter the library gets richer. Approved commentary becomes a reference example for the next quarter. The AI output improves not because the model improved, but because the context did. That compounding effect is the real return.

Want to discuss your specific situation?

Every automation problem is different. A short conversation is usually enough to see what is possible.

Get in touch →