Cyber security remains a major concern for most organisations, and the rapid advances we’ve seen in generative AI this year haven’t made the ongoing challenge of protecting assets and data any easier.
The most common way that cybercriminals target housing providers is through phishing attacks. Generative AI makes it easier and faster to create convincing fake videos, voice messages and emails, and low-skilled criminals can now get their hands on advanced tools to help them trick housing staff and their third-party suppliers.
Accidental data loss
There’s also the risk of accidental data loss from staff themselves. ChatGPT and its generative AI peers are a particular cause for concern. Put simply, staff may be tempted to enter confidential data into a chatbot to help them complete their work faster, but neither they nor the organisation they work for will have any idea what happens to that data afterwards.
Many public AI services retain what users type into them and may use it to train future versions of the model; the model becomes better informed, and therefore more useful, every time someone feeds it data. So if one of your employees puts sensitive data into a chatbot, it could resurface anywhere else in the world at a later date, in front of another individual using that same service. AI doesn’t respect confidentiality unless the rules are built into the system.
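As a purely illustrative sketch of what building those rules into the system can mean in practice, the short Python example below (the patterns, names and sample data are all invented for illustration) scrubs obviously sensitive details from text before it is ever pasted into a public chatbot:

import re

# Hypothetical patterns a housing provider might treat as sensitive:
# email addresses, UK phone numbers and National Insurance numbers.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "NI_NUMBER": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a labelled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Example: scrub a draft prompt before it leaves the organisation.
prompt = ("Draft a rent-arrears letter to John Smith "
          "(john.smith@example.org, 07700 900123, NI number AB123456C).")
print(redact(prompt))

Real data loss prevention (DLP) tools go far beyond simple pattern matching, of course, but the principle is the same: sanitise data before it leaves the organisation, rather than trusting a third-party service to keep it confidential.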
While this may at first seem like a major headache, these tools simply add one more risk to manage. And as the old adage goes, necessity is the mother of invention: many of the major technology players have been working on solutions to this problem, and while AI has leapt forward, so have the tools to detect AI-generated content and protect users against it.
Business and reputational risks
As ever more AI technologies are developed, launched and licensed, the problem will proliferate, and some of them will inevitably be controlled by actors less benign than OpenAI (the creator of ChatGPT) seems to be.
The world of AI presents very significant business and reputational risks, and housing providers should act now to protect themselves against them.
Game-changing solutions
At Quorum Cyber, we believe that while developments in AI tools might aid adversaries in the short term, the newest wave of tools will actually help security professionals to level the playing field in the longer term.
Our current focus is on comprehensively evaluating a global technology leader’s latest AI product so that, once it becomes available, our customers can use it to best effect.
Education is extremely important: housing providers’ staff need to learn how to use AI tools internally to protect their IT estates and their data, reducing their dependence on public AI models such as ChatGPT or Claude. In the near future, an AI-empowered workforce will be a major asset in the fight against cybercrime and accidental data loss.
Graham Hosking is the solutions director for data security and AI at Quorum Cyber.