Generative AI will displace fewer jobs than you think
The current generation of AI is productivity-boosting, but will ultimately fall short of expectations without significant, as-yet-unknown scientific discoveries.
I have been on the Generative AI train since before day one. My first brushes with Natural Language Processing (NLP) go as far back as 2008, and in 2011 I founded a startup focused on NLP applications for financial-markets trading systems.
Even with that background, I was blown away when first GPT-3 and later GPT-4 came out. I started integrating them into my work immediately, to the point where today Generative AI is an integral part of my daily workflows.
These tools are immensely useful and productivity-boosting; I’ve previously written at length about how I use them in my work.
But with deep familiarity also comes awareness of the limits and shortcomings of the current, and likely next, generations of this software: hallucinations are not improving, reasoning is lacking, models get stuck in local maxima and loops, and much more. These issues don’t look likely to go away: we have practically exhausted the available sources of training data, and improvements are now focused primarily on the inference side.
To make a long story short: to make significant progress from where we are today, we will need several as-yet-unknown scientific discoveries. Sure, we can polish what we have and make it smoother and less error-prone, but it ain’t AGI.
Unreasonable business risks of replacement
So we have an AI that hallucinates, gets things wrong 10-20% of the time (conservatively estimated), and lacks reasoning.
Where does that leave us in terms of unsupervised operation? For which business tasks are these sorts of failures acceptable?
Call centers and customer support have been mooted as one area, and Klarna claims to have made massive inroads here (nothing to do with an overfunded company struggling to live up to the valuations of past funding rounds and wanting to boost them by being an “AI company”, but I digress).
I’m not sure I buy it: sure, call centers and customer support may be candidates, IF you are happy with what is essentially an interactive-FAQ level of customer support. But for how many businesses will that be acceptable? Also, if the LLM agent hallucinates discounts, rewards, or refund policies, businesses may be held to them (as Air Canada discovered when a tribunal ordered it to honor a refund policy its chatbot invented). Is that going to be OK?
Let’s move up the stack to more intellectually demanding jobs: programmers seem to land on the chopping block quite a lot. But if LLM-produced code is wrong 10-20% of the time, and it is uncritically deployed to production, it is probably going to cause a business-ending issue a relatively high percentage of the time. A quick back-of-the-envelope sketch below shows how fast those per-change error rates compound.
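To make that concrete, here is a minimal sketch using my conservative 10-20% figures as an assumed, independent per-change error rate (an illustrative assumption on my part; real failures are neither independent nor uniformly severe):

```python
# Back-of-the-envelope: probability that at least one of n unreviewed,
# independently deployed LLM-generated changes contains a defect, given
# an assumed per-change error rate p. Illustrative only.

def p_at_least_one_defect(p: float, n: int) -> float:
    """P(>=1 defect) = 1 - P(all n changes are correct)."""
    return 1 - (1 - p) ** n

for p in (0.10, 0.20):
    for n in (5, 10, 50):
        print(f"error rate {p:.0%}, {n:2d} changes -> "
              f"{p_at_least_one_defect(p, n):.1%} chance of at least one defect")
```

Even at the conservative 10% end, ten unreviewed changes carry roughly a 65% chance of shipping at least one defect; at fifty changes it is a near certainty.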
Fire all the programmers, the very people equipped to evaluate the output, and let managers and business analysts dictate to an LLM instead? Good luck.
Who is getting replaced anyway?
It’s undeniable that AI will allow us to do more with less; I have already written about the paradoxical increase in demand for human intelligence that will result from this.
But let’s assume for a second we simply keep doing the same things as today, with fewer resources, and let’s focus on technology.
Who is best equipped to “manage” AI, evaluating and correcting its output? Undeniably, software engineers. So in practice it is simply an evolution of the profession: writing even less code (coding is only a fraction of what software engineers do) and spending more time reading it.
Expect increased demand for software engineers with a flair for product management, prioritization, and evaluating trade-offs. The profession will become more demanding, and the bottom 10-20% of “code monkeys” will be cut loose, as their value-add in this new reality is negligible.
What are the secondary effects of engineers taking on more responsibilities? We’ll see reduced demand for the ancillary skill sets and roles. Non-technical Business Analysts are on thin ice. Product Managers will continue to exist, but there will be fewer of them. Manual testers have long been disappearing and will continue to do so.
So if the above comes to pass, there will be significantly fewer people to manage. What, then, happens to the managerial classes when the people who remain are fewer and more self-organizing? I’ll let you finish my next sentence on your own.
The fallacy of replacing “the doing”
The fundamental fallacy in theories of technological job displacement is the assumption that those with expertise in doing the work will be replaced, while those without it (managers, analysts) will absorb their responsibilities.
This is an assumption based on hierarchy, not fact: feed the foot soldiers to the grinder while those “above” stay safe.
If you instead ask questions grounded in unit economics, the outcomes look quite different. Whose economic output is likely to be increased the most by new technology? Whose skill set is likely to benefit the least? And if fewer people will indeed be required, which roles and overheads will disappear?
Conclusion
My thesis is that, given the error rates of Generative AI, fewer jobs than we think will be entirely displaced, even low-end ones.
The current zeitgeist around displacement is also wrong about who goes first: apply the lens of whose output benefits most from new technology, and who is most adept at managing it, and those assumptions are turned on their heads.
Finally, being able to do more with less is primarily detrimental to roles that are overhead relative to the doing: the necessary evils of managing at scale, which may become surplus to requirements as the team size needed to achieve the same thing shrinks.