Predicting the Future


In 1982 the Musicians' Union in the UK passed a motion to ban the use of synthesisers in recorded music.

The synthesisers of that time, with their ersatz string sounds, were going to put real musicians out of work, and so in a desperate (and mostly symbolic) effort to stem the tide, the union took a stand.

Orchestras are still alive and kicking in 2025, despite the proliferation of music technology since the union passed that resolution. But sample libraries are also available to every bedroom producer, so anyone can record a string quartet sound, no matter their budget.

Have session musicians lost out as a result of this technology? Probably. A low-budget production will opt for samples over a real string section every time.

Musicians, however, are still around. Some of them are even working. And no one is blaming the advent of the synthesiser for the current problems in the music industry.

The future is hard to predict.

Technologists make all sorts of confident predictions that never materialise. In 2011 IBM's Watson was pitted against human champions on the quiz show Jeopardy!, and beat them. A flurry of prophecies soon followed: Watson would be helping doctors diagnose patients, and human-like intelligence was just around the corner.

In 2016 Geoff Hinton, a well-known AI researcher, said this:

People should stop training radiologists now. It is just completely obvious that within five years deep learning is going to be better than radiologists.

In 2025 radiologists do use machine learning to assist them with certain tasks, but AI is not about to replace the trained humans in the field.

Which brings me to Elon Musk and Sam Altman.

Elon Musk has been promising full self-driving cars since 2013. There are many quotes to choose from, but this one is from 2015, ten years ago:

We're going to end up with complete autonomy, and I think we will have complete autonomy in approximately two years.

How's it coming along, Elon?

Self-driving cars are not that easy to build. Machine learning has very real limitations, the main one being that the machine cannot handle situations not found in the training data. It cannot improvise. Anything unfamiliar will result in unpredictable outcomes, sometimes with fatal consequences.

A human driver, when faced with an upended trailer truck on the highway for the first time in their lives, will apply the brakes and stop. A Tesla might plough straight into it.

Sam Altman was also on record making bold predictions about self-driving cars and the role of AI in medicine at the tail end of the last decade. Needless to say, none of those predictions came true.

And here he is today, making grandiose statements about the future. A future in which the machines are vastly more intelligent than humans. GPT-5, which launched last week, is hyped as putting "PhD-level experts in your pocket". It wasn't long before the internet pushed back with endless examples of bullshit spewed by the new model.

Elon Musk and Sam Altman are hype merchants: marketers aiming to boost their companies and products. They are about as trustworthy as a used-car salesman, maybe less so.

The technologies around machine learning will definitely be useful, some day. But the hype around AI is just that: hype. What it all leads to is quite unpredictable, but my guess is that it will be different from what the hype-merchants are selling.

When JFK made his famous moonshot speech in 1962, no one could have predicted that 50 years later people would be using satellite navigation to find their way to the grocery store.

The space race was another epic technological shift with unpredictable outcomes. In 1962 people probably imagined a future of space travel and flying cars. The reality instead is satellite TV and GPS. Still useful, but less grandiose.

We can't predict the future, so don't believe those who confidently claim they can.

Richard Yot