
AI systems "can be weaponized," warns top U.S. cyber official

Technology firms urged to bake safeguards into their creations to prevent exploitation
People check their phones as AMECA, an AI robot, looks on at the All In artificial intelligence conference Thursday, Sept. 28, 2023, in Montreal. Top cybersecurity officials are urging technology firms to bake safeguards into the futuristic artificial intelligence systems they're working on to prevent them from being sabotaged or misused for malicious purposes. THE CANADIAN PRESS/Ryan Remiorz

Top cybersecurity officials are urging technology firms to bake safeguards into the futuristic artificial intelligence systems they're cooking up, to prevent them from being sabotaged or misused for malicious purposes.

Without the right guardrails, it will be easier for rogue nations, terrorists and others to exploit rapidly emerging AI systems to commit cyberattacks and even develop biological or chemical weapons, said Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, known as CISA.

Companies that design and develop AI software must strive to dramatically reduce the number of flaws people can exploit, Easterly said in an interview.

"These capabilities are incredibly powerful and can be weaponized if they are not created securely."

The Canadian Centre for Cyber Security recently joined CISA and Britain's National Cyber Security Centre, as well as 20 international partner organizations, in announcing guidelines for secure AI system development.

AI innovations have the potential to bring many benefits to society, the guideline document says. "However, for the opportunities of AI to be fully realized, it must be developed, deployed and operated in a secure and responsible way."

When it debuted late last year, OpenAI's ChatGPT fascinated users with its ability to respond to queries with detailed, if sometimes inaccurate, responses. But it also sparked alarm about possible abuse of the nascent technology.

Security for AI has special dimensions because the systems allow computers to recognize and bring context to patterns in data without rules explicitly programmed by a human, the guidelines note.

AI systems are therefore vulnerable to the phenomenon of adversarial machine learning, which can allow attackers to prompt unauthorized actions or extract sensitive information.

"There is agreement across the board, among governments and industry, that we need to come together to ensure that these capabilities are developed with safety and security in mind," Easterly said.

"Even as we look to innovate, we need to do it responsibly."

Many things can go wrong if security is not taken into account during design, development or deployment of an AI system, said Sami Khoury, head of Canada's Cyber Centre.

In the same interview, Khoury called the initial international commitment to the new guidelines "extremely positive."

"I think we need to lead by example, and maybe others will follow later on."

In July, Canada's Cyber Centre published advice that flagged AI system vulnerabilities. For instance, someone with ill intent could inject destructive code into the dataset used to train an AI system, skewing the accuracy and quality of the results.

The "worst-case scenario" would be a malicious actor poisoning a crucial AI system "on which we've come to rely," causing it to malfunction, Khoury said.

The centre also cautioned that cybercriminals could use the systems to craft so-called spear-phishing attacks more frequently, automatically and with a higher level of sophistication. "Highly realistic phishing emails or scam messages could lead to identity theft, financial fraud, or other forms of cybercrime."

Skilled perpetrators could also overcome restrictions within AI tools to create malware for use in a targeted cyberattack, the centre warned. Even individuals with "little or no coding experience can use generative AI to easily write functional malware that could cause a nuisance to a business or organization."

Early this year, as ChatGPT was making headlines, a Canadian Security Intelligence Service briefing note warned of similar dangers. It said the tool could be used "to generate malicious code, which could be injected into websites and used to steal information or spread malware."

The Feb. 15 CSIS note, recently released through the Access to Information Act, also said ChatGPT could help generate "fake news and reviews, to manipulate public opinion and create misinformation."

OpenAI says it does not allow its tools to be used for illegal activity, disinformation, generation of hateful or violent content, creation of malware, or attempts to generate code designed to disrupt, damage, or gain unauthorized access to a computer system.

The company also forbids use of the tools for activity with a high risk of physical harm, such as weapons development, military operations, or management of critical infrastructure for energy, transportation or water.
