
AI at work: the implications nobody fully anticipated, and what we should do next.

Randstad Advisory Signals from the Edge: a dispatch from your colleagues in 2030



The rapid introduction of AI is causing silent, structural crises in the modern workplace that employers must overcome to make their tech and business strategies a success. The core question must shift from "What can AI do?" to "What happens to us when AI does it?" Unintended consequences include workers facing a new "second job" managing AI, leading to cognitive fatigue; the quiet erosion of cognitive capabilities through disuse (what neuroscientists call synaptic pruning); and a shrinking collective diversity of thought. Randstad Advisory urges leaders to act now: protect crucial human skills and fix the interrupted junior talent pipeline.

We introduced AI into work and watched things improve. 

Outputs got better, decisions got faster. People had more capacity. The numbers made sense. Leadership was confident. Everyone agreed it was the right direction.

So we kept going.

What we didn't do — and this is what we wish we could come back and tell you — is pay enough attention to what would quietly change underneath all of that.

The way people think about their work has started to shift. Not in ways that show up in any dashboard. But gradually, in the day-to-day texture of how people operate, something is different.

People are busier, not less busy. They are faster, but somehow more stretched. They are producing more, but in ways that are starting to look more similar to each other. The thinking that used to happen naturally — the wrestling with a problem, the building of an argument from scratch, the judgment call made on experience alone — is happening less. 

And when it is needed, it isn't always there in the way it used to be.

We also noticed that two people sitting next to each other, doing similar roles, using the same tools, can have completely different experiences of AI. One thrives. One struggles. And for the longest time nobody could explain why. Because nobody had thought to ask how personality, thinking style and working preferences shape the way people partner with AI.

And then there is this: AI is doing the work that used to require people to find each other — to pull in a colleague, talk it through and build something together. AI is doing it alone. Quietly. But what we didn't see is that those conversations were doing something else entirely.

They were building the shared understanding, the trust, the collective intelligence that make teams capable of more than the sum of their parts. When AI replaces that process, the output looks similar, but the team does not.

And the younger people coming into organizations — the ones who would have spent their first few years building the foundations of their craft, making mistakes, learning the “hard way” — are no longer getting that chance. The analysis, the drafting, the research, the coordination — all the foundational tasks that used to justify hiring someone at the start of their career — are gone.

It seemed logical. Why hire a junior when AI could do it faster?

What no one asked was: Where do senior professionals come from? They come from the juniors who spent years doing that foundational work, building judgment, learning the craft. When we stop hiring them, we don't just lose the entry-level roles. We interrupt the pipeline that produced everyone above them.

By the time we understood this, the shortage was already here. By the time it felt urgent, it had already been going on for years. 

Now we are dealing with the consequences of a set of decisions that seemed perfectly reasonable when we made them, because we simply hadn't asked the right questions early enough.

We kept asking: What can AI do? We should have been asking: What happens to us when AI does what it can do? 

We just didn't know what we were looking at. What would it have taken to see it earlier?


we couldn't fully anticipate what we hadn't yet done 

There is a particular kind of consequence that only becomes visible once you are already inside the change. Not because the people involved were careless or the planning was poor, but because some things can only be learned by doing.

AI at work is, genuinely, something organizations have not done before. Not at this scale. Not at this speed. Not with this level of integration into how people think, judge and operate day to day.

The implications emerging now are not signs that something went wrong. Some of them carry the shape of things we already knew: about change, about people, about how organizations absorb new ways of working. We should have seen them coming. With hindsight, the pattern was familiar.

Others are new. Specific to this technology, this moment, this particular meeting point between AI capability and human habit.

This article is about both: What we are seeing and what we should have seen coming. And, as a starting point, what you must do as a result.


some of this, we should've seen coming

Some of the implications emerging from AI adoption are not surprising when held against what we already know about people, change and organizations. We know that when significant change is introduced without bringing people on the journey, the change becomes harder. The adoption curve lengthens. The gap between what a tool can do and what an organization actually gets from it widens.

Conversely, we know that when people are genuinely involved from the beginning, the journey moves faster. Different people respond to change differently. A single communication, training session or rollout approach will never land the same way twice.

Even where organizations attempt a more inclusive, bottom-up approach, the inherent chaos of grassroots AI experimentation often triggers a defensive reaction. In many cases, early organic energy is quickly funneled into rigid governance models, turning what should have been a richly varied explosion of local innovation into something safely, predictably vanilla.

These are not new insights. They come from decades of change management, organizational psychology and lived experience. And yet, in AI rollouts across organizations of every size and sector, they are being bypassed. The technology is deployed. The people are told. And the gap between those two things is where many of today's implications were born. 

signals from the field

    • McKinsey's analysis of more than 200 at-scale AI transformations found that high-performing organizations invest at least twice as much in change management as in building the solution itself.
    • An LSE and Protiviti study of 3,000 workers across 30 countries found that training — not age, digital confidence or generational identity — is the single biggest predictor of how effectively someone uses AI. A trained Gen X employee outperforms an untrained Gen Z employee. 
    • Yet, the same study showed that 68% of all employees received no AI training in the past 12 months. The knowledge of what works exists. The application of it is not keeping pace.

 

implication 1: taking away work isn’t reducing it 

Take the most common AI intervention: automating or augmenting a portion of someone's role. Employers are hoping to remove the repetitive tasks, free up time and let people focus on higher-value work. The logic is clean, and the intention is good. 

But something unexpected is happening.

The work that was taken away does not simply leave a space. A new kind of work is appearing in its place: managing the AI, checking its outputs, prompting, refining, correcting, directing. The interface itself becomes a job. The agent requires oversight. The tool requires maintenance and interpretation. In effect, there are two jobs where there used to be one.

signals from the field

    • 96% of C-suite leaders expected AI to reduce workload. Yet, 77% of employees say it has added to theirs.
    • According to Boston Consulting Group (BCG), there has been a 33% increase in decision fatigue among workers managing four or more AI tools simultaneously, alongside a 39% rise in major errors and a 39% increase in intent to quit. 
    • BCG researchers named the phenomenon "AI brain fry": the measurable cognitive fatigue experienced by workers managing multiple AI tools simultaneously. Productivity peaks at three tools. Beyond four, the numbers deteriorate sharply.

The assumption is: Give the work to AI, get time back. The reality, in many cases, is: Give the work to AI, get a new job managing it.


implication 2: the muscle you stop using quietly stops working 

There is a principle in neuroscience that is worth understanding before we go any further: The brain does not maintain capabilities it does not use.

Neural pathways that are exercised regularly become stronger, faster and more reliable. Neural pathways that go unused weaken; the connections thin, the signals slow, the capability that once felt automatic begins to require effort. And then, gradually, it begins to require more effort than it used to. It begins to produce worse results than it used to.

Neuroscientists call this “synaptic pruning.” The brain, in its efficiency, removes the connections it has decided are no longer needed. The less clinical version of the same idea is something most people already know intuitively: Use it or lose it.

This principle has always applied to physical capability. Muscles atrophy when they are not exercised. Cardiovascular fitness declines when it is not maintained. The body does not preserve what it is not asked to do.

Research is now confirming that this applies with equal force to cognitive capability.

AI, deployed without sufficient thought about which human capabilities must be actively protected, is triggering exactly this process — quietly and consistently in knowledge workers across industries and levels of seniority.

The mechanism is straightforward. When AI becomes the default for a particular kind of thinking, such as drafting, synthesizing, analyzing or diagnosing, the human brain no longer does that work. The synaptic connections associated with it begin to thin. The capability that once felt natural begins to feel effortful, in ways that do not show up in any output metric until the capability is genuinely needed and falls short.

What makes it dangerous is that it accumulates in the background, invisible both to the person it is happening to and to the organization measuring their outputs, until the moment it becomes consequential.

signals from the field

    • A study published in The Lancet Gastroenterology & Hepatology tracked 1,443 colonoscopies performed by 19 experienced endoscopists across four medical centers. Before AI implementation, the unassisted detection rate was 28.4%. After just three months of AI-assisted work, it dropped to 22.1%. The tool was working exactly as intended. The human capability was not.
    • A Microsoft Research and Carnegie Mellon University study of 319 knowledge workers found that higher confidence in AI correlates directly with less critical thinking.
    • An MIT Media Lab study tracked 54 adults over four months of regular AI use. Neural engagement dropped by 47% compared to unaided work. When AI access was removed, engagement did not recover quickly. The atrophy had set in.
    • A randomized controlled trial published in PNAS by University of Pennsylvania researchers, involving approximately 1,000 high school students, found that those with unrestricted GPT-4 access solved 48% more problems during AI-assisted sessions but scored 17% worse when AI access was removed.
    • Fabrizio Dell'Acqua's "Falling Asleep at the Wheel" experiment found that higher-quality AI does not make people more careful. When the AI performs better, recruiters trust it more, apply less scrutiny and miss more errors. Paradoxically, the organizations using the most accurate AI tools are in some cases producing worse outcomes than those using less accurate tools, precisely because the inferior tools keep humans alert and engaged. The better the AI gets, the harder it becomes to stay awake inside the work.
    • Cognitive research suggests that AI assistants might accelerate skill decay among experts and hinder skill acquisition among learners. It might also prevent experts and learners from recognizing these effects.
    • A paper published by Springer Nature frames it as a structural problem, not an individual one: AI creates "false expertise transitions" where apparent competence masks underlying knowledge gaps, a phenomenon already documented with measurable competency decline within months of adoption across medical, legal and cognitive domains.

This pattern has a precise historical parallel. In 1983, engineer Lisanne Bainbridge described what she called the “ironies of automation”: the more sophisticated automation becomes, the more demanding — not less — the human role within it. Automation causes operators' skills to atrophy through disuse. But when manual takeover is needed, something has usually gone wrong, exactly the moment requiring the greatest skill.

This is not an argument against AI at all. It is an argument for deciding, deliberately, which cognitive capabilities your organization must protect, and designing for that choice.

implication 3: not everyone uses AI the same way, and your organization doesn't know it 

The assumption behind most AI rollouts is that the tool is neutral, that it performs the same way for everyone who uses it, and that adoption is a matter of access and training. That assumption is not holding up.

People do not use AI in the same ways. Not because some are more capable, more motivated or better at prompting, but because the way they naturally think shapes the way they naturally partner with AI.

Someone who processes ideas externally — whose default is to develop ideas through dialogue, to work something rough into something formed through exchange — will use AI as a live thinking partner.

Someone who processes ideas internally, on the other hand, might use AI differently: as a tool for synthesis or verification, rather than generative conversation.

Neither approach is wrong, but they produce different outputs, create different risks, require different guardrails and respond to different kinds of support. 

If your organization has deployed AI as a single standardized rollout, you have almost certainly inherited many different, unmanaged approaches without realizing it.

signals from the field

    • A 2025 PwC study across more than 1,000 workers in Ireland confirmed that openness to change predicts how comfortably someone engages with AI tools, and that this relationship is specific to AI, not to technology in general.
    • A separate study found that task quality is higher among individuals who actively challenge AI outputs rather than accepting them at face value. This suggests that high initial trust in AI, associated with agreeableness, turns out to be a liability in collaboration. Those most inclined to trust AI outputs are precisely the people most vulnerable to losing their own critical skills over time. 
    • A comprehensive review of 58 empirical studies in Electronic Markets established that personality traits significantly shape trust in AI systems, with openness, agreeableness and conscientiousness all correlating with higher initial trust.
    • Research from the Association for Computing Machinery confirms that analytical and intuitive thinkers process AI explanations differently. What clarifies for one person actively confuses another.

Personality is dictating how people partner with AI: not just the outcomes of the work, but the nature of the partnership itself.

This raises questions most organizations have not yet begun to ask and answer:

    1. How do you know who in your organization is using AI in which ways? 
    2. How do you design guardrails that accelerate one person's learning without constraining another's? 
    3. How do you build a system that accommodates individual preferences while also knowing the limits of those preferences and avoiding natural style blind spots?

And then there is the layer beneath all of this that most organizations are not yet considering. When the tools provided do not fit how someone naturally works — when they are too slow, too limited, too controlled — people do not wait for a better solution to be approved. They find one, on a personal device, through a personal account and with a tool nobody sanctioned, knows about or can see. 

This is what is now being called “shadow AI” — the parallel system your workforce has already quietly built around your company’s official one.

It is invisible, untracked and entirely logical from the employee's perspective. And attempting to control it may not help: Banning it drives it further underground while changing nothing about the underlying reality.

What its existence is telling you is more useful than any governance policy you could put around it. Shadow AI does not emerge in organizations where people feel well equipped for the work they are being asked to do. It emerges in the gap between what was provided and what people actually need — between the rollout that was designed and the working reality it landed in.

That gap is the thing worth responding to.


implication 4: everyone gets sharper with AI, but collective diversity suffers 

This is the implication that is almost invisible in individual performance data and only becomes apparent when you look across the organization. AI demonstrably enhances individual performance. A Harvard Business School and BCG study of 758 consultants showed that, on tasks within AI's capability, individuals completed 12.2% more tasks, 25.1% faster, with 40% higher quality. The individual gains are real.

But the same study found that AI reduced collective diversity of thought by 41%. Everyone produced better work; however, the work became more similar. The capacity to generate genuinely different thinking, which was supposed to be its most durable competitive advantage, contracted.

signals from the field

    • A peer-reviewed study in Science Advances looked at creative writing with and without AI assistance. It found that AI-enabled work is significantly more similar across individuals than work produced without it. The authors described it as "a social dilemma: with generative AI, writers are individually better off, but collectively a narrower scope of novel content is produced."
    • A study published on ScienceDirect found the homogenization gap widens with scale. More alarmingly, when AI is withdrawn after extended use, individual creativity drops and content homogeneity continues to climb, even months later. The researchers call it a "creative scar." The atrophy of the individual voice persists after the tool is removed.

For organizations, the implication is structural. The diversity of thinking across a team is not just a cultural asset; it is how organizations generate the non-obvious solutions, the unexpected connections, the approaches that outperform what a competitor running the same AI systems will produce. If that contracts silently, it will not be visible in any individual's performance review.


we don't know what we don't know; that's the point 

Every major technology in history produced consequences nobody predicted. The internet democratized information; it also produced misinformation at a scale no one anticipated. Mobile technology connected the world; it also restructured attention, sleep and social behavior in ways still being understood. 

AI will be no different. There are already two measurable signals that are still not widely acted on:

1. the junior role crisis 

Early-career tech job postings across Europe are down 73%. Entry-level roles are disappearing as AI handles the work that used to develop the next generation of senior professionals. Stanford researchers found that, after GPT-4's release, employment of 22- to 25-year-olds in highly AI-exposed roles fell by approximately 13%, even as senior roles grew. Executives are publicly warning that agentic AI is hollowing out the junior developer pipeline.

The short-term economics are compelling. The long-term consequence is a structural gap in experience, judgment and institutional knowledge that no AI will be able to fill.


2. institutional memory loss

An estimated 42% of organizational knowledge lives with individuals, not in systems. AI is trained on explicit, documented knowledge; it cannot capture the tacit judgment, contextual experience and hard-won understanding that experienced professionals carry. As those professionals leave and AI handles more of the daily cognitive work, the context that makes an organization distinctive begins to erode invisibly, until a crisis reveals it is gone. 

If every company uses the same AI systems, competitive differentiation will come from the quality of context those systems draw from. Most organizations are depleting that context faster than they are building it.

The organizations that will navigate emerging consequences well are not the ones with better predictions. They are the ones with better reflexes, using systematic ways of detecting what they did not anticipate, having cultures where signals are surfaced rather than suppressed.


what you can do about these challenges

  1. Involve people from the beginning.
    Telling people to adopt is not the same as bringing them on the journey. The organizations that include people early move faster, not slower. Adoption that is instructed rather than co-created creates resistance that is harder to reverse than the delay involvement would have caused.
  2. Design for different personas — and do it systemically.
    A single communication or training rollout will not land the same way twice. We must design for variation deliberately, so that individual approaches add up to more than the sum of their parts. By moving away from rigid governance layers and toward real-time insights, we can achieve personalization at scale, with agile tuning of support based on how different groups are actually interacting with the technology.
  3. Close the leadership-employee gap.
    96% of C-suite leaders expect AI to reduce workload. 77% of employees say it has added to theirs. That gap does not close on its own. It requires honest measurement and honest conversation.
  4. Model it, don't just mandate it.
    Leaders who are visibly inside the AI journey, using the tools, talking openly about what they are learning and coaching their people through it, move their organizations faster than those who sponsor the change from a distance. AI fluency does not cascade through a policy. It cascades through behavior. And behavior starts at the top.
  5. Account for the second job.
    Managing AI is not one task; it is a set of new tasks that did not exist before: defining the problem, deconstructing it, auditing the process, setting context and constraints, prompting, reviewing outputs, refining iteratively, validating results, managing risk and maintaining oversight. None of these appear in any job description. Build them into role design and performance conversations before they become the invisible overload nobody can explain.
  6. Identify which cognitive capabilities must be protected.
    Decide deliberately which thinking skills your people need to keep exercising. Atrophy is quiet. Design for the capabilities you need to still be there in three years, not just the outputs you need today.
  7. Map how personality is shaping AI use in your organization.
    Understand the range of approaches your people are taking. Build guardrails that accelerate learning for different cognitive styles, rather than assuming one approach serves everyone. Know the limits of individual preferences as well as their strengths.
  8. Watch the collective, not just the individual.
    Individual performance metrics will not surface the homogenization of thinking. Measure diversity of approach, perspective and output at team and organizational levels — not just speed and quality per person.
  9. Protect the conditions for early-career talent to develop real capability.
    Domain knowledge, wisdom and judgment come from lived experience, not from watching AI do the work. The short-term economics of removing entry-level roles are compelling. The long-term consequence is an organization without the people who know how things actually work.
  10. Establish a genuine sensing mechanism for unintended consequences.
    Treat it not as a retrospective review, but as a live, ongoing practice of asking: "What are we seeing that we did not expect?" Surfacing these shifts early allows for a response while they are still manageable. This requires building a genuine capacity to see and hear what is actually happening, rather than what is wanted or expected. Ask people to surface feelings, impacts, actions and unknowns, and don't just ask for this: make it part of your expectation.
  11. Create psychological safety for naming what is going wrong.
    If people cannot surface concerns without it feeling like failure, the organization will only see consequences when they are already serious. Infosys and MIT Technology Review found that 83% of business leaders believe psychological safety measurably improves AI initiative success; however, only 39% rate their organization's psychological safety as high. If the wild, the scary and the unintended concerns are not being heard, it does not mean they are not happening; it means the conditions for that information to reach the surface do not yet exist.
  12. Set clear principles for responding to the unexpected.
    These must be established now, before the unexpected arrives. While the specific response cannot be scripted in advance, the values and decision-making logic that will guide it can be. By anchoring a raw, unfiltered intake of reality to these predefined principles, the organization ensures that when something alarming surfaces, it is met with prepared logic rather than reactionary panic.

here is what we know with certainty

Every organization reading this has already introduced AI into work in some form and is already living with implications they anticipated, and implications they did not.

The organizations that will genuinely pull ahead in this shift are not the ones that moved fastest with the technology. They are the ones that move most intentionally with their people. They are the ones that ask, consistently and honestly: What are we not seeing yet?

That question is not a sign of doubt about AI; it is the highest-quality strategic thinking you can apply to it.

The research is clear: Success with AI is 70% based on people and ways of working and 30% based on technology. The organizations treating it the other way around are building foundations that will not hold. The ones investing in the human infrastructure, the change capability, the cognitive protection, the persona-led adoption design, the sensing mechanisms for what they have not yet predicted, are building something more durable.

We are at the beginning of something genuinely consequential. The organizations that recognize that, and respond to it with the same intelligence they bring to any significant strategic decision, will not just navigate this shift. They will define what good looks like inside it.

That is the opportunity, and it requires asking the right questions.

A living set of questions for every organization putting AI into work, grounded in what we are seeing, updated as more becomes clear, and designed to be used not as a checklist but as a thinking tool.

That is what we are building, and what comes next. 

If the implications in this paper feel familiar, if you recognize the second job, the cognitive shifts, the fragmentation of how your people are using AI, the things you suspect are happening below the surface of your metrics, then this is the right moment to look at them honestly.

Because they are the kind of slow, quiet, compounding things that are always easier to address earlier than later. 

We are ready for that conversation. Are you?

about the author

Samantha Schlimper (Sam) has over 20 years of experience in the talent arena, most recently as Head of Talent Acquisition at Barclays Plc, where she led professional hiring globally and partnered with the full portfolio of business areas across Investment Banking, Consumer, Payments and Infrastructure. Sam was also the custodian of the assessment and talent attraction services, in addition to being the accountable executive for partnerships with the global RPOs for all permanent and contingent worker solutions. She has driven large change programmes from design through to embedment, delivering commercial and experiential objectives. Sam has a passion for innovation, transformation and differentiation, and for bringing together people, process and technology for improved performance.
