Valuing AI and the Human Condition in Organizations: How businesses can avoid using AI for a world no one wants to live in 

May 5, 2026

AI is everywhere. Its potential to fundamentally transform organisations and markets is enormous. There are ambitious expectations around efficiency and profit, and there are fears that activities and roles will be devalued and replaced. AI is often simultaneously deified and demonised. When it comes to AI, we are dealing with tensions and contradictions, and they are particularly hard to navigate because it is not foreseeable in which directions AI will change the world. Only if businesses value both AI and the human condition can they turn AI from an efficiency tool into something genuinely visionary.

The Fetish of Efficiency and the Space of the Human

The simplest and apparently strongest argument for using AI in organisations is efficiency: AI delivers remarkable efficiency gains, and with them competitive advantage. The argument is simple, compelling – and it’s true.

But it has a flip side: if everyone automates through AI, the competitive advantage disappears. Economic history offers a classic example of this levelling effect of broad technology adoption: Ford invented the moving assembly line and revolutionised automobile production. The time to build a Model T fell from over 12 hours to under 2. Then General Motors copied the assembly line, cancelled out Ford’s advantage, and won on other factors entirely.

The same dynamic is at work with AI: once AI use crosses a critical threshold of broad adoption, competitive advantages become incremental and other factors become decisive. These include how organisations organise themselves around AI, whether they cultivate human thinking, and whether they give the human factor sufficient room: human abilities, human-to-human relationships, human sense-making, and meaning.

An organisation that sacrifices these organisational, cultural, and human factors to AI-driven efficiency gains will have, at best, a few shining years. But in the long run it manoeuvres itself into a dead end with no emergency exit.

The Right Balance

When it comes to AI implementation in businesses and organisations, the right calibration and balance are decisive.

AI is more than a tool, more than classical technology. You underestimate organisations and businesses when you view them only in technical terms. But – and that might come as a surprise – you also underestimate AI itself when you understand it only in technical terms.

When human organisations are underestimated

Human organisations are underestimated when their problems are seen as technical problems and AI as the universal problem solver. That is a technocratic mindset: organisational and cultural problems are framed as technical problems, and AI appears as the perfect technical solution.

The problem with that view: human organisations and culture are treated as something they are not, and so what they actually are is missed entirely.

Culture, human abilities, interactions, and relationships in organisations are something other than technical problems. However radical and far-reaching AI automation may be: in organisations, people always have to deal with other people alongside technology and artefacts such as systems and processes. And at the end of the day, it’s always people who use AI.

You can try to resolve the many problems and frictions that arise from human and organisational imperfection with AI, but there are still humans who are confronted with that solution and have to deal with it: you can’t get the human out of organisations. Wanting to dissolve the problems of human organisations through technology is therefore a pseudo-solution that only creates new problems.

The inescapable human condition: AI has to make sense for humans

Humans live in a physical world, they have bodies, they are vulnerable and mortal, they are limited and finite. The horizon of our thought and action is always an expression of this human condition. That’s why we think and act differently than an AI. AI can support and augment our finite human life, but it always has to be translated into what makes sense for humans. Otherwise, it simply doesn’t make sense for us.

Currently, humans are being replaced by AI at an unprecedented speed, comparable to the industrial revolution. Roles, tasks, and activities are being taken over by AI. But there are at least three distinct areas where we can’t replace the human without eroding the human condition itself:

Genuine Human Abilities: There are abilities we share with AI and that AI is even better at. They come into play whenever linear reasoning, replicable rules, or pattern recognition is involved: identifying causes, drawing on established solutions, executing. But there are genuine human abilities that can’t be automated without loss. In such cases solutions emerge from the unique human capacity to act under uncertainty and deal with ambiguity, making unusual, creative connections. They don’t come from intelligence alone, but from intelligence embedded in the human condition, our bodies, emotions, and relationships, our finitude. As long as we’re not building perfect human replicas (and perfect here means: just as imperfect as real humans!), these abilities remain meaningful.

The Human Self: To be a human self means to think and act in the world as someone who stands behind their thinking and their actions. This is indispensable when it comes to decision-making and accountability, to judgment, curiosity, storytelling, and sense-making. Without it the self dissolves into a sea of unaccountability.

Human Relationships: As humans we are always part of a web of human relationships. Solitude is only possible because we are always already with others. We unavoidably interact with and refer to one another. Humans set up organisations precisely to organise human relationships. Managing people, building communities, and leadership are essentially about relationships. Without relationship skills there is no trust-building, no empathy or genuine perspective-taking, no mutual understanding, agreement, commitment, or excitement.

Overriding genuine human abilities, the self, and relationships means losing our connection to ourselves and to other people. That is particularly dangerous for organisations:

  • When we abandon our genuine human abilities, we become dependent on AI, unlearn skills, and trap ourselves in an AI echo chamber that confirms our biases.
  • When we neglect our self we lose meaning, purpose, and motivation. 
  • When we disregard human relationships we lose trust, produce alienation, and lose the social cohesion of an organisation. 

In other words: We would be implementing AI not for a human, but for a non-human world. And that’s not just overwhelming, it’s worse: it’s boring.

When AI is underestimated

An organisation that sacrifices the human factor also fails to do justice to AI itself. It loses the ability to absorb and recover from AI failures, and it even forfeits the benefits AI has to offer.

Through a distorted AI lens we might be seduced into viewing organisations as merely technical. But we can be similarly seduced into an overly technical view of AI itself: we see it as a local tool that simply helps to accelerate and optimise existing processes and systems. AI is treated like the chainsaw that replaces the handsaw; it is boxed in instead of being experimented with so that its potential can be unlocked. Just as it is one-sided to view AI as a universal fix-all, it is equally one-sided to view it as just another piece of office software.

There are three common ways of boxing AI in that miss its deeper potential:

AI as an input-output machine: Using AI as an input-output machine goes like this: put in the right input, get the right output. HR gets its template, finance gets its summary, marketing gets its copy. End of story. But AI isn’t a procedure you follow and execute; it’s a conversation you shape and develop, one that puts down its own roots and dynamically adapts to its environment. The value isn’t in the correct syntax; it’s in the judgment, context, and thinking you bring to the exchange.

Query logic thinking: Ask, receive, move on. Most people still approach AI like a reference book or a search engine. They look something up, get an answer, close it. The real work happens in the back-and-forth: pushing back, following a thread, thinking with it rather than extracting from it.

AI fluency as a technique: Many organisations have no real AI strategy: they simply integrate and scale the individual, self-taught AI use of their members. Other organisations go further but in turn reduce organisational AI fluency to an isolated skill that can be trained or delegated: a few AI officers are appointed and burdened with the task of creating an AI-fluent organisation, or organisations run prompting workshops and leave it at that. But technique is only an entry point. What actually matters is the intuition built through sustained, complex use, and how AI is institutionalised and embedded in the thickets of the organisation.

What happens after fermentation?

AI implementation in human organisations has to account for both sides, the human side and AI. AI and human organisations are like yeast and water: yeast is inert on its own. Water alone is just water. Add them together: fermentation begins, something new and alive happens. But: How much yeast do you want to use, how warm should the water be, what happens after fermentation?

In other words: Accounting for both the human and AI in organizations is not where the story and the work ends. It is where it begins. It is where things are no longer boring but genuinely exciting and potentially visionary.

Written by Georg Spoo, Manager – Engagement Design & Delivery.

