Adam Groenhout

The Shrinking Moat: Examining Job Security in the Age of AI


The Moat is Meager

Allow me to be pessimistic for a moment. This is not my normal state of mind, but thinking like this has bubbled up over the past couple of years more often than I would like to admit. Heads up: this piece is more about feeling than analysis.

Right now at work, people like dealing with you (you’re a friendly human). Eventually, most people will be viewed as bottlenecks. You're slow. Your weaknesses show. Coworkers and clients want things done fast and at the highest quality. They will come to see you as a barrier. Your moat is that AI isn't plugged into workflows and data stores, yet. A lot of information lives in people's heads and in the physical space around them, where AI can't reach it. People don't trust AI decisions. All of this is temporary. AI will integrate with every information stream: meetings, conversations, document and code repositories, and external data feeds. Once AI gains access to and learns the full context, the moat will disappear.

So your real moat is time. Your job, as it is, will go away. AI will take it. You'll need a new one. Before that though, people who use AI better than you will shrink the moat. They'll do your job faster. They'll manage AI agents. They'll be more productive. You're defending against the future. And against those who get there first.

The Moat is Shrinking

Every day, significant news drops about advancements in artificial intelligence (AI), and very often, these drops feel like one more tug on the plug holding back the water from draining out of the “moat”. The concept of a moat is frequently discussed in terms of how businesses defend themselves against competitors. It is the unique selling point(s), differentiators, and strengths that make an organization stand apart. This metaphorical moat serves as a barrier to protect a company's unique position in the marketplace. For individuals and their professional work, the moat protects job security.

AI gets faster. It gets more consistent and accurate, hallucinating less and making fewer mistakes. It connects to more information. It becomes more autonomous and independently capable. We (humans) can't keep up. Our moat is drying up, and fast. If this seems dramatic and hyperbolic, I don’t think you are paying close enough attention to what’s happening.

Source: https://ourworldindata.org/artificial-intelligence

Where do things stand right now? Where is our moat?

The Moat is Temporary

The moat for knowledge workers like myself is shrinking. This applies to individual tasks and to entire jobs. Nearly all kinds of work will be changed by AI. People’s roles and the tasks they perform will inevitably shift, so current roles will not look the way they do today. Beyond that, many entire job categories will disappear, and some people will lose their jobs outright.

AI operates with a speed and consistency at scale that outpace human ability by a wide margin. Its precision and its capacity to ingest vast amounts of unstructured information near-instantaneously make humans look downright inept. What’s left right now?

Right now, I think much of the common moat comes from four main areas:

Slow AI adoption

Organizations themselves provide some moat by creating insulation, albeit very temporary insulation. Employees have an advantage only because their companies have not yet meaningfully integrated AI into their systems and workflows. Organizations are slow to adopt AI for a variety of reasons: high costs, data security concerns, job replacement fears, integration complexity, unclear regulations, and simple lack of awareness. Generally speaking, it is just a matter of time before this type of moat withers too, either because AI becomes deeply integrated or because the organization fails to integrate it and dies.

Distrust of AI

While AI has improved markedly, there remains a widespread lack of trust in its capabilities, especially when it comes to critical decision-making or tasks with significant consequences. Right now, AI is not as controllable, predictable, and consistent as humans in certain scenarios. While humans know when they don’t know something and navigate accordingly (hopefully with honesty), AI sometimes conceals its lack of understanding. Concerns about errors and the potential for unintended harm contribute to this distrust. Some of these concerns are antiquated and misplaced, and those that are valid today may not be tomorrow. Building trust in AI systems through transparency, explainability, and accountability will be crucial for wider adoption and acceptance, and progress in this space is rapid.

Preference for humans

Despite advances in AI, the desire for human connection and interaction remains strong. People often prefer to engage with other humans, particularly in situations that require empathy, emotional intelligence, or nuanced understanding. It’s often said that building AI systems that can complement and enhance human interactions, rather than replace them, will be essential for successful integration. This may be true in the short term, but I don’t think it will hold off task and job displacement for long. In the not so distant future, when humans want a human interaction, a human interface, they may simply turn to virtual AI avatars that appear “more human than human.”

Limited AI data access

Much important information is not accessible to AI. A substantial amount of critical knowledge is undocumented, residing within the minds of individuals, which AI cannot access when making assessments and decisions. As the documentation of knowledge expands, through recorded meetings, automated interviews, and comprehensive data collection, AI will gain access to this formerly inaccessible information. Once AI can effectively tap into all this, the human moat will diminish significantly. Additionally, access to sensitive and proprietary organizational data is limited by integration barriers, lack of sensor deployment, security concerns, privacy regulations, and the potential for misuse. This limited access hinders the development and effectiveness of AI applications that rely on large datasets and comprehensive information. It will soon be considered common knowledge that AI must be given as much organizational information and context as possible for an organization to remain competitive and survive.

Beyond the Moat

Jobs are quickly losing their protective "moat" from AI. The current defenses (slow AI adoption, distrust, human preference for human interaction, and limited AI access to data) are substantial, but temporary. AI is improving rapidly, and once it is fully integrated and trusted, many of the jobs we know today will be taken over. I suspect this transition will progress exponentially.

I will end on a positive note here. The caveat is that all of these things are nuanced and there are exceptions to every rule :). In all seriousness, there is a positive vision for the future of work where AI handles the boring tasks, allowing humans to focus on creativity and strategy. This is a future where the drudgery of laborious and tedious tasks is not part of any job, and work is largely dreaming, defining a vision, and communicating all of that to AI for execution, with shaping along the way. Sounds great to me.