Security researchers can too easily break through ‘guardrails’ instituted in AI software and manipulate the software into ignoring safety restraints
The cyber landscape continues to evolve as major organisations deal with the aftermath of a crippling cyber attack. With payroll data compromised, attention now turns to vulnerabilities in AI as a likely future target, and the battle to prevent such attacks appears already lost, says GlobalData, a data and analytics company.
GlobalData's Principal Analyst, Thematic Intelligence, comments: “The ingenuity behind these attacks is beyond the capability of most enterprises to prevent. They can only take steps to be as resilient as possible. These attacks are tried and tested, perhaps more than many realise.”
He adds: “The battle to prevent these sorts of attacks from occurring has already been lost. What is important now is for security specialists – companies, researchers, security vendors, and governments – to put their best efforts into limiting as far as possible the use of artificial intelligence (AI), including generative AI, by hackers for offensive purposes.
“Events this week demonstrated that security researchers can too easily break through so-called guardrails instituted in AI software and manipulate the software into ignoring safety restraints and then revealing private information. If they are not controlled, these vulnerabilities will lead to future AI-driven cyberattacks.”
GlobalData's Principal Analyst and Global Enterprise Cybersecurity Lead says: “This is a classic case of an insufficient risk management posture across company supply chains. Risk management compliance guidelines such as NIST's go some way towards addressing supply chain cybersecurity risks. However, both user and supplier initiatives around cybersecurity are simply not sophisticated enough to provide visibility across the complete supply chain.
“Therefore, even now, with developments in AI and the sheer volume of use cases for it, the question remains: is the world moving into a darker place, with the potential for adversarial machine learning attacks through these vulnerabilities?”
“This incident – in which, instead of encrypting data and demanding a ransom in exchange for a decryption key, the cybercriminals threaten to publish the information – is one of a steadily growing stream of similar attacks.
“Prevention is critical. Organisations need to make sure they are running the most current anti-virus software. Another important defence is end-user education, as attackers often use phishing and other social engineering tactics to breach an enterprise.”