Cyber Magazine September 2022 | Page 21


method being used currently. It works by flooding AI and machine learning (ML) systems with incorrectly classified information and files that can change the way these systems identify certain things. This could be as harmless as convincing a computer that cows are dogs by bombarding it with pictures of dogs labelled ‘cows’, or it could be as nefarious as teaching military technology to confuse friendly and enemy combatants in a war zone. Password recognition is another AI-based system that can be compromised this way, which can affect authentication for entire platforms and all their users.

Many AIs learn by processing data from largely public places such as Twitter and Facebook. Hackers are able to probe the AI algorithms, essentially reverse engineering them, and then create malicious bots that post information which poisons the algorithm that’s busy learning. This poisoning can have a very dangerous domino effect, especially if AIs are learning from other AIs.

“HAVING PROPER AUTHENTICATION CONTROLS IN PLACE CAN BLOCK PRACTICALLY ANY SORT OF MALICIOUS ACTOR ATTEMPTING TO GAIN ACCESS TO AN AI PROGRAM OR A DATABASE FEEDING INTO AN AI OR ML SYSTEM”

Separately, there are AI technologies that can create malware capable of mimicking trusted system components in order to improve stealth attacks. For example, cyber actors use AI-enabled malware to automatically learn an organisation’s computing environment, its patch update lifecycle, its preferred communication protocols, and the times when its systems are least protected. Hackers can then execute attacks that go undetected because they blend into the organisation’s security environment: TaskRabbit, for example, was hacked, compromising 3.75 million users, yet investigations could not trace the attack. Stealth attacks are
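The label-flipping poisoning described above can be sketched in a few lines. This is a minimal toy illustration, not any real system: it assumes a made-up nearest-centroid classifier over one-dimensional "image" features, where dog-like pictures cluster near 1.0 and cow-like pictures near 5.0, and the attacker floods the training data with dog-like samples mislabelled as cows.

```python
# Toy sketch of data poisoning via label flipping (illustrative numbers only).
# A nearest-centroid classifier is "trained" on 1-D features; the attacker
# injects mislabelled points to drag the "cow" centroid into dog territory.

def train_centroids(data):
    """Compute the mean feature value per label."""
    sums, counts = {}, {}
    for x, label in data:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(centroids, x):
    """Assign x to the label whose centroid is nearest."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# Clean training data: dog pictures near 1.0, cow pictures near 5.0.
clean = [(0.9, "dog"), (1.1, "dog"), (1.0, "dog"),
         (4.9, "cow"), (5.1, "cow"), (5.0, "cow")]

# Attacker floods the training set with dog-like samples labelled "cow".
poison = [(1.0, "cow")] * 100

honest = train_centroids(clean)
poisoned = train_centroids(clean + poison)

print(classify(honest, 1.1))    # -> dog  (correct on a new dog picture)
print(classify(poisoned, 1.1))  # -> cow  (poisoned model misidentifies it)
```

The point of the sketch is that the attacker never touches the model itself; mislabelled training data alone shifts the learned statistics enough to flip predictions, which is why poisoned public data sources can cascade through systems that learn from them.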