Many thanks to all who joined us on February 28 to take part in our interactive demos of generative AI and the discussion that followed. We got a lot out of it and hope you did too.
For those who made it to the event – and those who couldn’t – we thought it would be useful to summarise some of the discussion, kindly curated by Jane Wakefield, AI expert and former technology correspondent at the BBC.
- Generative AI is here to stay. It’s a question of adapt or fail. At the same time, we need to recognise it’s in its infancy. Like an over-confident public schoolboy, it can say the wrong thing with disarming confidence. In part that is because its output depends heavily on whatever web content it has been trained on, which makes it easy to manipulate. Trust is therefore a big issue, and reputations are at risk.
- Generative AI content, whether text-based or visual, can only draw on what already exists. It cannot come up with anything new. It’s like a newspaper story based on cuttings rather than first-person interviews.
- This means generative AI content is a bit like sliced white bread. It does a job, but it’s generic. In our world of creative services, many (most?) brands are looking for creativity that makes them unique and distinctive. An ‘artisan-baked loaf’.
- That said, generative AI is amazing and we are finding it does offer opportunities right now if used intelligently. For instance, ChatGPT is a bit like Google on steroids. It’s a shortcut to getting answers. But these answers need testing, improving and fashioning into something unique.
- Visual tools like Midjourney and DALL-E are also a really useful way of kick-starting creative thought processes.
- A big use case in the short term is generative AI’s ability to speed up behind-the-scenes processes, using transcription tools such as Otter.ai and Happy Scribe, freeing creatives to spend more time on actual creativity. You can even change video backgrounds to greenscreens at the click of a button now.
We reckon the summary is ‘handle with care’ while staying in touch with generative AI’s evolution. There are many ethical and legal issues which need to be bottomed out before it will become trustworthy and mainstream. For instance:
- What happens if a government decides to flood the web with manipulative messages? (e.g. “coal is good for the planet”)
- Who owns the copyright in the images used to train generative AI programs? There are plenty of legal actions already under way.
- Will we get into a loop where AI content is derived from AI content?
- What will happen if people stop visiting the websites that actually generate the content which AI harvests for its answers? The whole ecosystem of the internet is challenged, as is the very source of information that AI depends on.
- Will AI content have to be labelled as such? What will users think?
Overall, as Jane said in her conclusion: “It is hard to see how generative AI will ever be able to replicate human experience which creates unique, emotionally engaging insights.”
And as the FT recently put it: “It is important for users to recognise generative AI models for what they are: mindless, probabilistic bots that have no intelligence, sentience or contextual understanding. They should be viewed as technological tools but should never masquerade as humans.”
We will be watching this space as the generative AI story evolves. If you would like to know more, please get in touch by contacting us at email@example.com.
Stay ahead of the curve
Sign up to our emails