AI has been around for quite a while (officially since 1956, to be exact), but modern LLMs have made it accessible to ordinary people. As a source of competitive advantage, it is now increasingly adopted by firms, especially professional services ones (either explicitly, as a strategic business move, or quietly, by employees, when a company takes a stand against AI).
AI is pervasive – its adoption will only continue. Current industry leaders (the big tech companies) will most likely hold and grow their positions, getting closer to profitability. AI-based solutions will keep multiplying; they’ll become more diverse, more tailored to specific requirements, and, hopefully, more accurate (not all at once, certainly). And since we are all here – in the same AI boat – we should learn how to steer it.
In consulting, AI has already become an integral part of the working process – most consultants (I can’t speak for all) use LLMs daily. Some manage to do so responsibly and to good effect, but others do not, and several problems become apparent: content gets hallucinated more often, fact-checking and personal input decline (consultants may not even understand what they present), and security issues arise. How should consulting companies, their managers, their employees, and their clients handle this new world?
Let’s break down what each stakeholder actually controls and what they can demand from others.
1. Clients
At the end of the day, clients are the ultimate stakeholders in consulting engagements. It’s critical for them to agree internally on what they pay for – and then define their expectations about AI use accordingly:
If output quality – contracts can specify validation requirements, quality checks, or penalty clauses for faulty data (which undermines the “it’s only wrong if you catch it” logic).
If human hours – clients might require at least a basic acknowledgment of LLM usage from vendors (the next level up: disclosing which tools were used, how and where, how outputs were validated, and whether confidential data entered external platforms – a sketch of such a disclosure follows this list).
If trust and peace of mind – both parties should agree on ethical LLM usage guidelines, data handling protocols, and verification standards before work begins.
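To make that “next level up” of disclosure concrete, here is a minimal sketch of what a per-task LLM usage record could look like. It’s purely illustrative – the structure and field names are my own assumptions, not an industry standard – written in Python simply because structured fields are easier to show as code than to describe in prose:

```python
from dataclasses import dataclass

@dataclass
class LLMUsageDisclosure:
    """One hypothetical record per AI-assisted task, attached to the deliverable."""
    tool: str                 # which LLM or platform was used
    task: str                 # what it was used for
    output_location: str      # where the output landed in the deliverable
    validation: str           # how a human verified the output
    client_data_shared: bool  # did confidential client data enter an external platform?

# An example entry a delivery team might log (all values are made up):
entry = LLMUsageDisclosure(
    tool="general-purpose chat assistant",
    task="first-draft summary of public market reports",
    output_location="slides 12-14",
    validation="figures re-checked against the original sources by an analyst",
    client_data_shared=False,
)
print(entry)
```

Whether such a record lives in a contract annex, a project tracker, or an internal tool matters less than both parties agreeing on the fields up front.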
An important caveat! Clients might prefer the traditional way of doing things just to be on the safe side (which is perfectly understandable, considering how much is going on in the market). In that case it won’t be easy to convince a client that AI serves their interests, even if they are aware of AI’s full potential (and of the fact that leading firms invest heavily in such capabilities to produce better insights, not just to cut corners).
Either way, clients gain from clearly framing their requirements from the outset.
2. Consulting companies
For consulting firms, AI governance is no longer optional – especially as contracts begin to include LLM usage clauses. Those clauses must cascade down to delivery teams through project guidelines, NDAs, and corporate policies. Consider the following questions for starters:
Does putting client data into AI violate confidentiality agreements?
If deliverables contain faulty data, does that equal damaging client property – or ‘just’ incompetence?
If a consultant can’t explain the data in their deck, is that acceptable?
Encouraging employees to treat AI responsibly isn’t enough. The very least each firm should do is embed AI policies and training at scale (one-off sessions won’t work). Equally important, proper usage should be supported, and malpractice monitored and penalized (when nothing’s at stake, why would consultants choose the harder path?). Direct stipulations in employment and freelance contracts, or indirect references to internal policies, are among the ways to enforce what’s right.
Consultancies should therefore not only adopt LLMs transparently but also make the process ethical and verifiable (even when clients are skeptical).
3. Managers
In most cases, AI-generated content looks credible: the numbers seem reasonable, sources are listed, storylines flow. Except the numbers aren’t real or don’t add up, the sources don’t contain the stated information, and the facts lack context – and these errors are harder to catch than traditional consultant mistakes, carry more risk, and bring more shame when revealed.
Managers’ role is crucial – they need to:
know in depth what constitutes responsible LLM practice – and what doesn’t,
ensure team members understand expectations about validation and disclosure,
verify results consistently (extra review time needs to be built into project scopes, especially with juniors),
and lead by example – managers can’t enforce standards they don’t follow themselves.
Ultimately, LLM governance must be addressed at the organizational level – this is a top priority for firm leadership, not something individual managers can solve in isolation.
4. Employees
Cutting corners with LLMs might feel efficient at first: the work gets done, managers are happy – until the consequences hit. In the short term, it can cost jobs and even careers. In the long run, employees risk losing their competencies while being gradually replaced by AI – no longer a distant horror story.
It’s therefore worth identifying the skills that make you irreplaceable and cultivating them (at the very least, somebody will still have to set goals for AI tools, choose which ones to use, and control their outputs). In consulting specifically, one needs to know what is right and what drives better client results – that’s the core human value proposition. Firm management aims for the same thing (it’s the entire point of a service business, beyond profitability), so being good at what you’re meant to do matters.
Managers may make unreasonable demands, though (e.g., prohibiting LLM use for tasks that could be done 10x faster) – in that case, it’s better to raise the issue rationally. After all, there is a huge difference between responsible use and evading responsibility: “I built tool X to achieve result Y (legally) with benefit Z for the client” vs. “I didn’t have time to do it the normal way and didn’t check the output”.
That said, each stakeholder should, first, understand for themselves the implications of LLMs changing the way consulting work gets done and, second, take precautionary measures: clients define what they’re buying and what LLM usage they’ll accept; firms embed AI governance through policies, training, culture, and contracts; managers shoulder greater responsibility and deepen their AI knowledge; employees prioritize competence over shortcuts.
So should AI be part of consulting contracts? Yes, though to what extent remains a matter of individual stakeholder judgment. Let’s start by establishing clear LLM terms in client-consultant agreements and building comprehensive internal governance – and then, who knows: as these practices mature across firms, they may eventually shape national AI regulatory frameworks.