
Will AI really change the nature of work?

There's nothing inherently 'intelligent' about AI, says Raghav Singh. As such, it may not change work as quickly as we think...

Jun 12, 2023

The hype around AI is reaching new heights every day.

We only have to look at the burgeoning news stories on the topic, all predicting the impending doom of the human race.

The latest report featured Geoffrey Hinton – the man widely considered to be ‘The Godfather’ of artificial intelligence (AI). He recently quit his job at Google so that he could warn about the growing dangers he sees in the field.

According to Hinton: “It is quite conceivable that humanity is just a passing phase in the evolution of intelligence… you need biological intelligence to evolve so it can create digital intelligence. Digital intelligence can absorb everything that people learned and then it can get the direct experience with the world. It may keep us around for a bit to keep the electricity plants around.”

Wow! The Matrix really was predicting the future.

“You hear that Mr. Anderson?… That is the sound of inevitability… It is the sound of your death… Goodbye, Mr. Anderson…”

The hype is all consuming…

What’s more striking than the results produced by queries to these products is the ability of the marketing teams behind AI to impress people who should (supposedly) know better. One writer in a national newspaper described LLMs (Large Language Models) as “… humanity’s ability for the first time to manufacture something in a godlike way that approaches general intelligence.”

Take what sci-fi writer Arthur C. Clarke wrote in 1962: “Any sufficiently advanced technology is indistinguishable from magic.”

Sure, ask any of the new LLMs behind Jasper, Bard, or ChatGPT to write an essay or a blog post, or to analyze a table, and – just like magic – the result is a reasonably well-written response.

But it’s no more magic than David Copperfield making a 747 disappear or any performance by an illusionist on America’s Got Talent.

These LLMs produce arrangements of words, simulating human expression. The algorithms allow them to assemble sentences based on the probability of particular words following others.
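
To make that concrete, here is a minimal, purely illustrative sketch of next-word prediction. The probability table and words below are invented for the example; a real LLM learns probabilities over tens of thousands of tokens from vast amounts of text, but the principle of extending a sentence one likely word at a time is the same.

```python
import random

# Toy next-word probabilities, invented for illustration only.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "report": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "sat": {"quietly": 1.0},
}

def generate(start_word, max_words=5):
    """Extend a sentence one word at a time, sampling each next word
    in proportion to how likely it is to follow the previous one."""
    words = [start_word]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:
            break  # no known continuation; stop
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat quietly"
```

That is all that is happening under the hood, just at an enormously larger scale.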

…but the nature of work will change only slowly

So what impact will the LLMs have on jobs and work? AI will undoubtedly alter the nature of work, as any major technological change always does, going all the way back to the invention of the printing press in the 15th century.

But let’s separate hype from reality.

One assessment of the impact of AI is that 80% of jobs today include one or more tasks that can be done by AI.

What that means is that some tasks in 80% of jobs will be done differently – not that 80% of jobs are at risk of being automated, as many headlines imply.

IBM expects to replace about 8,000 jobs with AI over the next five years. Well, the best-laid plans of mice and men…

A 2020 survey of employers by the World Economic Forum found that respondents expected the share of business tasks performed by machines to rise from 35% to 47% (almost half) over the following five years. But the level of automation today is barely one percentage point above what it was then.

The jobs most at risk are those heavily focused on communication, coordination and content generation – writers, PR and marketing professionals, and lawyers.

LLMs can produce reasonably good content where quantity matters and quality is not a major requirement: press releases, blurbs, product descriptions, the boring PowerPoint decks produced for conference sessions, and so on.

However, this is content that is usually glanced at, glossed over, and/or quickly forgotten.

Sure, legal briefs, wills, trusts, contracts and agreements – the vast majority of which are templates built on boilerplate language – can also be generated by LLMs. Consequently, those engaged in such jobs may need to retrain as prompt writers. But content that requires creativity and original thinking may be beyond even the capacity of an LLM to deliver.

For one thing, what they produce represents the middle ground of the content posted on the internet, and the output is designed for coherence rather than originality.

The algorithms are also constantly being tweaked so as not to produce any output that might be considered offensive or biased. The rules for what counts as either are entirely subjective and reflect the prejudices of the companies producing the LLMs.

Programmers are already using LLMs to correct their code, and it’s not inconceivable that eventually most computer code will be produced by an LLM working from programmers’ prompts. But such an advance may be slow in coming. The code LLMs are trained on is mainly what is publicly available; much of the code behind the best software is held privately by companies and is not available for a training data set.
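
As an illustration of the kind of correction involved (a hypothetical example, not drawn from the article): a classic Python bug, and the fix an LLM coding assistant would typically suggest.

```python
# A hypothetical example of the kind of bug programmers routinely ask an
# LLM assistant to spot and fix: Python's mutable default argument.

# Before: the default list is created once and shared by every call,
# so items from earlier calls leak into later ones.
def add_item_buggy(item, items=[]):
    items.append(item)
    return items

# After: the correction an assistant would typically suggest.
def add_item_fixed(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

print(add_item_buggy("a"))  # ['a']
print(add_item_buggy("b"))  # ['a', 'b']  <- state leaked from the first call
print(add_item_fixed("a"))  # ['a']
print(add_item_fixed("b"))  # ['b']       <- each call is independent
```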

Already there are indications that LLMs may be reaching the limits of advancement. Training an LLM costs a lot – training GPT-3 was estimated at around $4.6 million in compute, and GPT-4 reportedly cost more than $100 million.

Training LLMs gets more expensive faster than it gets better, because the computing power required scales up much faster than the amount of input data.
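
A rough back-of-the-envelope illustration of why that is, using the commonly cited rule of thumb that training compute is roughly proportional to model parameters multiplied by training tokens (the model and data sizes below are invented for the example):

```python
# Back-of-the-envelope scaling arithmetic. Assumes the commonly cited
# approximation that training compute ~ 6 * parameters * tokens;
# the sizes used here are illustrative, not any vendor's figures.
def training_compute(parameters, tokens):
    return 6 * parameters * tokens  # approximate FLOPs

base   = training_compute(1e9, 1e10)   # 1B-parameter model, 10B tokens
bigger = training_compute(2e9, 2e10)   # double both the model and the data

print(bigger / base)  # 4.0 -> compute (and cost) quadruples while data only doubles
```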

But the biggest barrier to further development of LLMs is that the trove of training data may already have been used up.

The LLMs have consumed just about all the high-quality content available on the internet. More content is being continuously generated, but not in volumes sufficient to allow for much advancement of the training.

Not ‘Deus Ex Machina’

LLMs are word machines, not gods from a machine.

There’s nothing “intelligent” about them.

A better term to describe them may be one suggested by Stefano Quintarelli, a member of the High-Level Expert Group on Artificial Intelligence of the European Commission. He calls them Systematic Approaches to Learning Algorithms and Machine Inferences (SALAMI).

SALAMI is likely to result in a decline in the need for record-keeping and administrative roles, including cashiers and ticket clerks; data entry, accounting, bookkeeping and payroll clerks; and administrative and executive secretaries.

Large-scale job growth is expected in education, agriculture and digital commerce and trade.

But the declines and increases depend on how quickly and effectively the technology is built into applications, and on how well processes are adapted to let it be used productively. And that’s not baloney.

 
