TechScape: Will OpenAI’s $5bn gamble on chatbots pay off? Only if you use them

Most people have used cutting-edge AI but the magic is wearing off. Photograph: Carol Yepes/Getty Images

What if you build it and they don’t come?

It’s fair to say the shine is coming off the AI boom. Soaring valuations are starting to look unstable next to the sky-high spending required to sustain them. Over the weekend, one report from tech site the Information estimated that OpenAI was on course to spend an astonishing $5bn more than it makes in revenue this year alone:

If we’re right, OpenAI, most recently valued at $80bn, will need to raise more cash in the next 12 months or so. We’ve based our analysis on our informed estimates of what OpenAI spends to run its ChatGPT chatbot and train future large language models, plus ‘guesstimates’ of what OpenAI’s staffing would cost, based on its prior projections and what we know about its hiring. Our conclusion pinpoints why so many investors worry about the profit prospects of conversational artificial intelligence.

The most pessimistic version of the story is that AI – specifically, chatbots, the expensive and competitive segment of the industry that has taken the public’s imagination by storm – is simply not as good as we’d been told.

That argument suggests that, as adoption has grown and iteration has slowed, most people have had the chance to properly use cutting-edge AI, and have started to realise that it’s impressive but perhaps not that useful. The first time you use ChatGPT it’s a miracle; by the 100th time, the flaws have become apparent and the magic has faded into the background. ChatGPT, you decide, is bullshit:

In this paper, we argue against the view that when ChatGPT and the like produce false claims, they are lying or even hallucinating, and in favour of the position that the activity they are engaged in is bullshitting … Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit.

Train them to come

I don’t think it’s that bad. Not because the systems are flawless, though; I think the AI transition is falling at an earlier hurdle. Before people can try chatbots, realise they’re bullshit and give up, they have to meaningfully try them at all. And that, judging by the response of the tech industry, is starting to look like the bigger hurdle. Last Thursday, I reported on how Google is partnering with a small-business network and multi-academy trusts to introduce AI into workplaces to help boost workers’ abilities rather than replace them. Debbie Weinstein, managing director of Google UK and Ireland, told me:

Part of what’s tricky about us talking about it now is that we actually don’t know exactly what’s going to transpire. What we do know is the first step is going to be sitting down [with the partners] and really understanding the use cases. If it’s school administrators versus people in the classroom, what are the particular tasks we actually want to get after for these folks?

If you are a school teacher, some of it might be a simple email with ideas about how to use Gemini in lesson planning, some of it might be formal classroom training, and some of it one-on-one coaching. Across 1,200 people there will be a lot of different pilots, each group with around 100 people.

One way to look at this is that it’s just another feelgood investment in the skills agenda by a large company. Google, in particular, has long run digital training schemes, once branded as the company’s “Digital Garage”, doing its part to upskill Britain. More cynically, it is good business to teach people how to use new technology by teaching them how to use your tools. Britons of a certain age will vividly remember “IT” or “ICT” classes that were a thinly veiled course on how to use Microsoft Office; those older or younger than me learned something closer to foundational computer programming. I learned how to use Microsoft Access.

In this case, there’s something deeper: Google doesn’t need to just train people to use AI, it also needs to run a trial to even work out what, precisely, they should be trained in doing. “This is much more about little everyday hacks, to make your work life a little bit more productive and delightful, than it is about fundamentally overhauling an understanding of technology,” Weinstein said. “There are tools today that can help you get your job done a little bit more easily. It’s the three minutes that you save every single time you write an email.

“Our goal is to make sure that everyone can benefit from the technology, whether it’s Google’s or other people’s. And I think the generalisable idea that you would work alongside tools that can help you do your life more efficiently feels like something that everyone can benefit from.”

Since ChatGPT arrived, there’s been an underlying assumption that the technology speaks for itself – helped by the fact that, in a literal sense, it does. But chat interfaces are opaque. Even if you’re managing an actual human being, it is still a skill to get the most out of them when you need their help, and it’s a much greater skill if your only way of communicating with them is a text chat.

AI chatbots aren’t people – not even close – so it is commensurately more challenging to even work out how they can fit in a typical working pattern. The bear case for the technology isn’t “What if there’s nothing there?” Of course there is, even given all the hallucinations and bullshit. Instead, it’s far simpler: what if most people just don’t bother to learn how to use it?

Mathsbot gold

Meanwhile, in another bit of Google:

Even though computers were made to do maths faster than any human could manage, the top level of formal mathematics remains an exclusively human domain. But a breakthrough by researchers at Google DeepMind has brought AI systems closer than ever to beating the best human mathematicians at their own game.

A pair of new systems, called AlphaProof and AlphaGeometry 2, worked together to tackle questions from the International Mathematical Olympiad, a global maths competition for secondary-school students that has been running since 1959. The Olympiad takes the form of six mind-bogglingly hard questions each year, covering fields including algebra, geometry and number theory. Winning a gold medal places you among the best handful of young mathematicians in the world.

The caveats: the Google DeepMind systems “only” solved four of the six problems, and one of them was solved using a “neurosymbolic” system, which is rather less like AI than you might expect. All of the problems were manually translated into a programming language called Lean, which allows the systems to read them as formal descriptions rather than having to parse the human-readable text first. (Google DeepMind tried using LLMs to do this part too, but they weren’t very good at it.)
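To give a flavour of what that formalisation looks like, here is a minimal, hypothetical Lean 4 snippet. It is a toy statement, not one of the Olympiad problems, and not drawn from Google DeepMind’s actual setup:

-- A toy example, not an actual IMO problem: a statement written in Lean 4
-- that a proof assistant can check formally, with no natural-language parsing.
theorem toy_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b

Once a problem is written in this form, any candidate proof can be checked mechanically by Lean’s proof checker, rather than relying on a human to judge whether the argument holds.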

Even so, this is a pretty big step. The International Mathematical Olympiad is hard, and an AI scored a medal. What happens when it scores a gold? Is there a step-change between being able to solve challenges that only the very best secondary-school mathematicians can tackle, and being able to solve ones that only the best undergraduates, then postgraduates, then doctoral researchers can? What changes if a branch of science gets automated?

If you want to read the complete version of the newsletter, please subscribe to receive TechScape in your inbox every Tuesday.
