In a recent report, Hiring Lab, the economic research team of the LinkedIn competitor Indeed, analyses the impact of Generative AI on a range of jobs by looking at the skills those jobs require. Based on this analysis, professions are ranked by how susceptible they are to being transformed or even replaced by AI.
The report finds that Software Engineering is the profession whose skills Generative AI is best at, followed by IT Operations and Mathematics. It is not surprising that a technology developed mostly by computer scientists, who write a lot of code1 and, in the case of AI, do a lot of tedious linear algebra, is most helpful in those areas.
What is surprising is the data this report is based on. Here is an excerpt from the report:
Starting with a universe of more than 55 million job postings mentioning at least one work skill published on Indeed over the past year, we identified more than 2,600 individual skills within those jobs and organized them into 48 skill families. We then asked ChatGPT, a leading GenAI tool, to rate its ability—either poor, fair, good, or excellent—to perform each skill family, with all skills within a given family assigned the same rating as the family overall.
“We then asked ChatGPT” – the authors of this report believe ChatGPT to be capable enough to accurately assess its own skills. In a way, this is a meta-study of a lot of unreliable data, since LLMs are trained on large datasets of human-written content. It is a particularly lazy meta-study, because sources are neither identified nor evaluated for credibility; instead, the authors trust the swarm intelligence of the internet on a topic that nobody has yet studied carefully.
There must also have been tons of articles on how AI will change Software Engineering published before September 2021 (the alleged training cutoff for GPT-3.5, though nobody really knows) that were fed into the LLM, influencing this outcome. Here is one from 2020, for example, and here another one from 2019, and here something from March 2021. If GPT-4 uses more recent data, the result should be even less surprising.
Perhaps the authors thought ChatGPT already has superhuman intelligence. Even if that were the case, it would still be pretty ignorant to assess someone's skill by just asking them. That's what tests are for.
I am not claiming that Generative AI does not and will not have a big impact on Software Engineering. I personally have used GitHub Copilot extensively, and it has increased my productivity on tedious, repetitive tasks like writing boilerplate code or defining similar classes. It is especially useful when working with multiple languages: I can define an interface for a POST request in TypeScript and ask ChatGPT to create a corresponding class in C#, so the backend gets the same structure.
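A minimal sketch of that workflow, with a hypothetical interface and fields (the names are illustrative, not from an actual project):

```typescript
// Hypothetical request body for a POST endpoint, defined once in TypeScript.
interface CreateUserRequest {
  username: string;
  email: string;
  isAdmin: boolean;
}

// An example payload that conforms to the interface.
const payload: CreateUserRequest = {
  username: "ada",
  email: "ada@example.com",
  isAdmin: false,
};

// Serialized as JSON, this is what the backend receives.
const body = JSON.stringify(payload);

// One might then paste the interface into ChatGPT and ask for the
// mirrored C# class, which could come back looking something like:
//
//   public class CreateUserRequest
//   {
//       public string Username { get; set; }
//       public string Email { get; set; }
//       public bool IsAdmin { get; set; }
//   }
```

The appeal is that the mapping is mechanical: field names and types translate one-to-one, which is exactly the kind of rote transcription an LLM handles well.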
But LLMs are not magical tools that will spit out the correct answer to everything, especially not on a topic where there have not been enough data points (e.g. reports that are not themselves based on data produced by LLMs) to inform the underlying probabilistic model.
-
1. Except the theorists, of course.