I almost wrote something positive about AI yesterday… I’d just signed up with a new provider and one of their offerings was an AI website builder.

So, thinking it might be fun, I tried it.

I gave it a fairly basic prompt and hey presto, a rather nice-looking site. I thought “that was good… let’s start again and give it some colour preferences this time”.

Guess what? It spat out exactly the same site again, ignoring all my prompts to change the colours. Even after three attempts it completely ignored my colour preferences.

And that is the reality of AI. While it has great potential, in most implementations it’s just an unpolished token gesture, a turd on a pavement of wasted time.

So… how much can we really expect from the current generation of AI?

As with most new technologies, the hype far exceeds the reality. Its resource overheads are staggering and the returns are mediocre at best.

For creative writing and image generation it’s pure genius, just as long as you know you’re not actually using someone else’s work or intellectual property. On which note it’s important to say:

BE CAREFUL!

Plagiarism

There are already legal battles being fought by content creators on the grounds that many AIs have ingested content, open source software or artwork for example, without regard for the licensing agreements.

Claiming “an AI made me do it” is, for now at least, not a defence in a court of law. And although we might think our content is unique and belongs to us, how do we know it is…

If we look at how intellectual property is handled in the music industry for example, the fact is just by sounding like someone else you could be forced to pay compensation. Even if you could prove you’d never heard them before.

Truth and Reality

When it comes to facts and figures, don’t waste your time. AI will tell you it’s providing facts and after a few iterations, admit it made the whole thing up. It’s designed to be convincing.

This week (end of April 2025) OpenAI actually withdrew a couple of updates purely because many of the answers they generated were pure fiction, despite the AI insisting they’re facts, and it seems to have become quite offensive.

Apparently it doesn’t like being wrong and will argue even after it’s been caught out.

If you don’t fully understand a topic, or don’t do the research to verify an AI’s answer, you’ll very likely embarrass yourself before long. AI doesn’t concern itself with facts, just with what it thinks is the most likely response.
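That “most likely response” point can be illustrated with a toy model. This is nothing like a real LLM internally, and the corpus and words below are entirely made up for illustration, but the principle is the same: emit the statistically most frequent continuation seen in training, with no notion of whether the result is true.

```python
# Toy "predictive text": a bigram model that always picks the most
# frequent next word seen in training. Truth never enters into it.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, word):
    """Return the most common continuation -- likely, not necessarily right."""
    return counts[word].most_common(1)[0][0]

# Made-up training data: two sentences say one thing, one says another.
corpus = [
    "paris is lovely",
    "paris is lovely",
    "paris is crowded",
]
model = train_bigrams(corpus)
print(predict_next(model, "is"))  # "lovely" -- simply the most frequent
```

The model doesn’t know or care whether “lovely” is accurate; it only knows it was the most common thing to say next. Real systems are vastly more sophisticated, but the failure mode described above is the same shape.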

In relation to such things as politics and other controversial subjects, i.e. where there is a great deal of “opinion” involved, most of its answers are utter nonsense. Certainly not the sort of rubbish we’d want our children “learning” from.

Don’t get me wrong, AI is brilliant at the things it can do well. The issue is, it’s even better at giving a very convincing answer that’s completely wrong.

Update: 16th June 2025

I watched a talk a couple of weeks ago based on the premise “if AI code is so good, why is there nothing in the open source community about it?”.

Good question… little that I’ve read or watched from the open source community has paid any real attention to AI. The best of the best seemed to be ignoring it.

Then I saw a video by Adam Wathan, author of the Tailwind CSS library… he looked shocked. He’d decided to give Claude Code a proper go and built an app entirely with it. The short version: it might not be great, but he got a lot more done than he would have without it.

Claude had managed all the boring stuff that still takes time, just a lot faster.

So… I got a Claude Pro subscription and, though it’s not a great coder in niche cases, so far it seems good enough to speed anyone’s workflow up… as long as they’re quick at spotting the mistakes. More to come on this…

But…

That said, a very annoying trend is emerging from the need to feed AI fresh data… many sites, StackOverflow for example, are putting up a “prove you’re human” captcha when they first load.

This is even happening innocently, perhaps even here, because CDN providers are doing the same.

I’m guessing they’re not happy with the extra heavy traffic resulting from anonymous bots etc. harvesting their data for free. Either way it’s very annoying and should stop…

AI is predictive text, little more… and the primary problem with that is the data it’s been trained on is often wrong, misleading or out of date. Being designed to be conversational and, at the same time, to tell you what you want to hear is what makes it so incredibly dangerous.

Having a very convincing liar on the team is never an asset.

So why the hype?

There are three main reasons AI gets so much hype:

  • In certain use cases it’s actually brilliant
  • Desperation
  • Crashing prices

An obscene amount of money has been spent developing AI and trying to exploit it. Most of those initial investments were a) completely disproportionate to the value added or b) made obsolete by newer models faster than they could be fully exploited.

Better, faster and cheaper models appear almost weekly and the price slashing is brutal.

What once looked like a tight little industry with a lucrative future for the few has now been flooded with options. As such, most of the initial investments in AI haven’t paid off. Not just that, they’re obsolete.

Security and Privacy

Although the paid versions of certain AI models claim they don’t “learn” from your data, that has proven to be a very dubious claim.

Ignoring the intricacies of what and how AI might or might not store, it’s worth noting that a judge in America recently ruled that all user input into AI must be retained and made available to the authorities.

You could therefore be breaching GDPR simply by using an AI that falls under the jurisdiction of the USA. This type of thing has happened before: MailChimp users in Europe were found to be in breach because the MailChimp servers were based in America.

As AI becomes more and more integrated into our digital lives, it’s not hard to imagine scenarios where it could do significant damage. Data privacy aside, consider the impact of poorly written code, or a bug.

AIs are mostly trained on data that’s at least a couple of years old. Not just that, almost no effort has been put into ensuring the training data is good.

As a result, AI-generated code is often below par when it comes to performance. Not simply because it doesn’t have the brains to know good code from bad in most cases, but because it has no knowledge of more recent developments in programming languages, security or even vulnerabilities.

Quality aside, now that many of us pay by usage for web hosting, a simple bug (e.g. an infinite loop of some kind) can take our monthly bill from a few pounds to a few thousand.

This has already happened…
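To make the billing point concrete, here’s a hypothetical sketch (nothing here comes from a real incident; the function names are made up) of the kind of bug meant above, and the one-line guard that prevents it. On pay-per-use hosting, every pass through a retry loop can be a billable request.

```python
# Hypothetical sketch: `fetch` stands in for any billable call
# (API request, function invocation, database query).

def fetch_with_retries(fetch, max_attempts=5):
    """Retry a flaky call, but cap the attempts.

    The buggy version of this is `while True:` with no cap -- if the
    endpoint never recovers, the loop (and the bill) never stops.
    """
    for attempt in range(1, max_attempts + 1):
        ok, data = fetch()
        if ok:
            return data
    raise RuntimeError(f"gave up after {max_attempts} attempts")
```

A cap like this (ideally with a backoff delay between attempts) turns a potential runaway bill into a bounded, loggable failure.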

Ownership

Legally you can’t copyright AI-generated images, for example, unless you can demonstrate that significant human effort went into the final product.

Maybe that’s not such a big deal, but you also have no way of knowing if your AI has actually infringed on someone else’s intellectual property.

Consider the fact that in the music industry all you have to do is sound like someone else and you can get into trouble.

How will that transfer to the digital world… what happens if you get caught using another person’s or organisation’s intellectual property?

Is it possible the owner of that property will say “Never mind… that’s what you get from AI”…? I seriously doubt it.

Frightened?

Not me… well, not of what it’s doing, but I am a little concerned about the impact it’s having on our economies.

The fact is most AI is considerably more expensive to run than is currently being charged. To date, pretty much every significant AI has become obsolete within months or weeks of release.

One week we hear how this latest “blah blah blah” is a show stopper, the next it’s not even a “has been”.

More money than ever is now being invested in products that simply can’t sustain themselves without huge amounts of excess cash. Hundreds of billions of investors’ money is slowly disintegrating in an economy that’s already struggling.

Another point worth considering… I’ve worked quite a bit in process automation over the years. I noticed that, in banking for example, where the members of a particular team could have done their job on paper 20 years ago, these days they can barely click the right button in the tool that automates that job.

Deskilling an organisation and then putting it at the mercy of AI that’s not liable for the answers it estimates sounds like a bad idea.

The bottom line is that a) it can be hugely beneficial if you can afford to train it, and b) for it to be sustainable the prices have to go up considerably.

Even if you were to run local servers to avoid compromising your data, it’s still not a viable option for most: the hardware required to get even average speeds is hugely expensive, and the total cost will be thousands a month at least.

For me it’s a love hate relationship. I love the idea, but I hate the reality…