02/03/2026 · 10 mins read

The materiality trap: when process crowds out priorities

Hannah Nascimento
Sustainability Director

You’ve followed the process. Scored the issues. Run the stakeholder survey. And yet, you’re still drowning in topics. The business nods politely but doesn’t change how it operates. Leaders see the matrix but don’t know what to do with it. And when someone asks what actually matters most, the answer is… everything, apparently.

The problem isn’t that you did it wrong. It’s that materiality, as typically run, doesn’t deliver as much value as it could. The process is compliant. The output is a matrix or a list. But the assessment hasn’t created the clarity, consensus or momentum it was supposed to.

If this sounds familiar, the issue probably isn’t methodology. It’s something deeper.

Why materiality doesn’t deliver

Three things tend to get in the way.

  1. Misalignment before the process starts. Different functions bring different lenses to materiality. Sustainability teams lean toward impacts. Risk leans toward financial exposure. Legal and compliance focus on obligations and controls. Strategy focuses on value creation. And the rest of the business is focused on quarterly numbers.

Without alignment up front – on definitions, thresholds and what the assessment is supposed to produce – the process becomes a series of parallel conversations that never converge.

Sustainability sees everything as material. Risk sees almost nothing. Compliance recognises only what’s mandatory. And often, key functions aren’t even part of the process – so their perspectives only surface later, when the results are challenged or ignored.

The result is often a long list of topics and no shared view of what genuinely matters. And when everything is material, how does anyone work out what’s actually important?

  2. Scoring and engagement become over-engineered. There’s a temptation to add rigour through complexity: more scoring dimensions, broader surveys, finer gradations of impact and likelihood. The process can quickly start to feel like a burden.

Regulatory best practice expects you to assess impacts across dimensions like severity, scale, likelihood and irremediability, and that’s appropriate. These lenses matter. But problems arise when the scoring becomes an end in itself: multiple dimensions multiplied across dozens of sub-topics, decimal-point precision, elaborate weighting systems. The methodology grows more complex, the process becomes burdensome, and participants begin to lose interest. Engagement slides into a dead zone, and the output doesn’t get any more useful.

The difference between a 3.2 and a 3.4 isn’t real. And when the focus shifts from ‘what matters’ to ‘how do we score this’, the process loses sight of its purpose, and you lose the opportunity to gather rich insights to guide decisions.

The goal isn’t to abandon structure. It’s to use these dimensions to guide judgement, not replace it – and to keep the methodology proportionate to what the business can actually act on.

Stakeholder engagement has a similar trap. Broad surveys can give too much weight to voices that lack the context, business understanding or data to score meaningfully. And when engagement is designed to demonstrate breadth rather than surface insight, it generates volume without value.

The real rigour is in the quality of the conversations: who’s in the room, what evidence they’re drawing on, how disagreements get resolved. Over-engineering the mechanics doesn’t compensate for under-investing in the dialogue.

  3. The assessment stops at the matrix. Most organisations produce a list and call it done. But this only shows what was scored high or low. It doesn’t explain why it matters, what the business should do differently, or who owns what happens next.

Without the work that comes after – segmenting topics, clarifying which represent risks and which represent opportunities, understanding what each topic impacts, connecting findings to strategy – the matrix becomes a reporting artefact. It ticks the disclosure box and then sits in a drawer.

And here’s the practical reality: no organisation can act on 20+ material topics simultaneously. Businesses have finite resources, competing priorities and limited capacity for change. Materiality should help them focus on where value is at risk, and where value can be created. If the assessment doesn’t enable that focus, it hasn’t finished its job.

What to do instead

The fix isn’t a better methodology. It’s a different focus. These are the three things that most assessments underweight or skip entirely.

  1. Before the scoring: build alignment. The most important work happens before anyone scores anything. Bring together sustainability, risk, finance, legal, compliance and strategy to agree definitions up front. What do you mean by ‘material’? By ‘impact’? By ‘financially material risk’? What constitutes ‘an opportunity’?

Start with a working session – not a presentation – that brings the key functions together before any scoring begins. Use it to surface how each group currently thinks about materiality, where the definitions differ and what thresholds would feel meaningful. The goal isn’t immediate consensus. It’s making the differences visible so they can be resolved deliberately, not discovered later when the results don’t land.

Ideally, your materiality thresholds should connect to how the organisation already thinks and talks about strategy and risk – including the enterprise risk management (ERM) framework, if one exists. But many ESG topics involve intangible risks or consequences that play out over long time horizons, and these don’t always fit neatly into existing risk language. That’s fine. The goal is consistency, not sophistication. Thresholds need to be shared, understood and consistently applied – even if they’re simpler than a formal ERM system.

Documenting this alignment work matters. Clear rationale for your definitions and thresholds is what makes the assessment defensible when regulators or auditors ask how you reached your conclusions.

This alignment step prevents the outcome where every function views materiality, and the results, differently because no one agreed on the rules. It also makes the final results defensible: you can explain not just what scored high, but why, using language the business already understands.

  2. During the process: prioritise structured judgement. The best assessments don’t rely on a single data source or a single lens. They triangulate: impact (from sustainability), financial exposure (from risk and finance) and regulatory implications (from legal and compliance). When two or more lenses converge on the same topic, you have strong justification. When they diverge, you have a conversation worth having.

Stakeholder engagement should surface insights you wouldn’t get from internal analysis alone. Unless you’re consulting specific stakeholders or experts, it shouldn’t be used to generate scores. Design it to challenge assumptions, reveal blind spots and add context to your internal assessment. Hear from those most affected and most knowledgeable, weight their input based on proximity and vulnerability, and treat contradictions as data worth exploring rather than noise to smooth over.

With this input in hand, get specific. Material topics are often broad – climate, circularity, human rights. But not all aspects of a topic will be equally significant. Breaking topics into sub-topics helps you identify which parts actually matter for your business model: which parts of circularity – product design, packaging or waste in operations?

This specificity sharpens prioritisation: you focus on the aspects that are genuinely significant rather than treating everything under a broad heading as equally urgent. And it makes the assessment more useful for the business.

‘Circularity is material’ doesn’t tell procurement or product teams what to do. ‘Resource scarcity and circular materials’ does.

To be clear: a material topic still needs to be disclosed, even where your ability to influence is limited. But the business response should be proportionate. Where you have significant impact and real leverage, that’s where action and investment should focus. Where impact exists but influence is limited, the response might be engagement, collaboration or transparency about constraints. Sub-topics help you make these distinctions and then defend them.

Lastly, build in a cross-functional challenge session. This is often the step that gets compressed or skipped, but it’s where the real value is created. Bring together senior voices from across sustainability, risk, finance, legal and the business to debate the findings, surface trade-offs and make decisions together.

It’s a working session where the shared view gets forged and priorities are shaped. Document the rationale.

  3. After the matrix: translate into action. The matrix or list is a starting point, not an end point. Material topics need to be segmented: which are risks, which are opportunities? What does each topic impact – reputation, operations, revenue, licence to operate? Where is value being protected and where can it be created?

A harder test

If you want to know whether your materiality assessment is working, don’t ask whether it’s compliant. Ask whether it did its job:

  • Did the assessment create genuine consensus? Can you point to a moment where sustainability and finance disagreed, debated the issue and reached a shared position with documented rationale? If the process avoided conflict rather than resolving it, the alignment isn’t real.
  • Can you defend what’s out as clearly as what’s in? Auditors and stakeholders will ask why certain topics didn’t make the cut. If the answer is ‘it scored lower’ without a clear explanation of thresholds, evidence and judgement, the assessment isn’t assurance-ready. The discipline of exclusion is as important as the discipline of inclusion.
  • Do your material topics feel like a genuine prioritisation – or a long list with tiers? If you ended up with 20+ material topics and everything still feels equally urgent, the process avoided the hard choices. Prioritisation means saying some things matter more than others. If the assessment couldn’t do that, it didn’t finish its job.

Refresh or restart?

This is the question we hear most often: do we need to tear it up and start again, or can we build on what we have?

The honest answer depends on whether the foundations exist. If your previous assessment achieved genuine cross-functional alignment, then a refresh can build on that foundation.

But if the original process was run by sustainability alone, or if the alignment work never happened, there’s no foundation to build on. You’re not refreshing an assessment. You’re doing one properly for the first time.

Either way, the goal is the same: an assessment that creates consensus, enables prioritisation and gives the business something it can act on.

The real point

Materiality done well creates a shared, enterprise-wide understanding of what matters most and helps define what the organisation needs to do next. It strengthens alignment across sustainability, risk, finance and the business, and it builds the internal legitimacy that makes action possible.

Done poorly, it becomes an exercise that satisfies nobody. A process that produces a matrix, ticks a compliance box and changes nothing.

If your materiality assessment isn’t delivering as much value as it should, Lumina Materiality can help. We deliver focused priorities, defensible outcomes and results the business can actually use. Contact Hannah for more information. 
