ChatGPT banned over data breach concerns: is Italy setting a precedent?

6 min read | Tim Olsen | Article | Workforce management | Industry insights


ChatGPT reached 100 million users a mere two months after launching last year, and has assisted in everything from itinerary planning to software development. It even saved a dog’s life. However, a temporary ban imposed by Italy in March has many wondering how far the technology can be trusted.

  • Italy became the first Western country to impose a ban on ChatGPT following data security concerns – will the rest of Europe follow?
  • According to our Global Cyber Security Report 2023, 80% of leaders say their organisation experienced a phishing attack in 2022 – artificial intelligence (AI) could aggravate these threats.
  • Industry leaders have called for a pause in AI development, with fears that the aggressive pushing of AI products poses a major risk to society.


Is Italy’s ChatGPT ban just the beginning?

Generative AI tools like ChatGPT draw on enormous amounts of data to form their responses, and there are concerns that some of this information may come from personal sources.

On March 31, Italy became the first Western nation to ban ChatGPT, albeit on a temporary basis. The Italian data protection authority cited privacy concerns related to OpenAI’s advanced chatbot – specifically, a data breach involving user conversations and payment information. The Italian watchdog said it would also investigate whether ChatGPT complied with the General Data Protection Regulation, a central element of the EU’s privacy law framework.

So, will Italy’s ban prompt other nations to do the same? Reports suggest that other European countries haven’t ruled out the option, although there are no signs so far of a widespread ban on OpenAI’s chatbot. And while the UK government has set out plans to regulate AI with new guidelines on responsible use, policymakers appear reluctant to stifle AI-enabled growth with a patchwork of confusing legislation.

There’s no denying though that Italy’s ban – knee-jerk reaction or not – has brought ChatGPT’s data security considerations into sharper focus. And the potential risks are only likely to grow.


AI could heighten cybersecurity threats

Whether or not other countries follow Italy’s example, it’s clear that organisations and governments alike shouldn’t rule out the inherent risks associated with ChatGPT and other AI models – not least, AI-powered cyberattacks.

Our latest Cyber Security Report revealed that 80% of leaders say their organisation experienced a phishing attack in 2022. Threat actors – already emboldened by distributed workforces and ongoing skills shortages – now have even more opportunities to breach an organisation’s data thanks to the advent of advanced AI models. While AI tools can bolster cyber defence mechanisms – improving threat detection, increasing response times, and harnessing continuous learning – these same traits can equally be leveraged by cybercriminals. As a result, it’s more important than ever to have the skills and support of adaptable cyber professionals to shore up an organisation’s defences.

Cyber threats and data breaches aren’t the only risks though; the spread of misinformation and the proliferation of fake news will only add uncertainty and fuel global tensions. However, some experts fear a far more dystopian future if AI is left unchecked.


Experts claim a pause is needed – or a complete shutdown

Artificial intelligence experts and tech leaders – including OpenAI co-founder Elon Musk – have urged a six-month pause on the creation of “giant” AIs. An open letter signed by thousands, ranging from academics to engineers, warns against developing AI systems more powerful than the recently launched GPT-4, highlighting the risk advanced AI poses to society. Whether this is an earnest plea or a catch-up ploy is hard to tell, but it’s apparent that many are uncomfortable with AI’s dizzying progress.

Some experts claim that a pause is not enough, asserting that the acceleration of AI-powered tech demands a complete shutdown. Along with spreading misinformation and breaching data privacy, there is a fear that AI models could become more competent than humans, blurring the lines between master and servant.

At this stage, trying to stop AI could be futile – it’s already heavily ingrained in our society and collective psyche, and its potential to power up productivity and close the digital skills gap is hard to pass up. Politicians and business leaders alike may move quickly to align laws with emerging AI, but the technology will always outpace them. While guardrails and governance frameworks are still important, the best response may be to invest in people: those who can adapt to a rapidly changing future.

Download our first ever global Cyber Security Report to inform your cyber defence strategies, secure the talent you need, and code in long-term resilience.


About this author

Tim Olsen

Making Intelligent Automation scale, consultant, futurist, influencer and speaker.
