Cyber Magazine August 2024 | Page 96

OPERATIONS
With so much good on offer , it may be hard to see what the , if any , downsides are . While not a drawback if addressed , two issues that could grow to become a concern : a novel risk that demands careful consideration and mitigation strategies when it comes to data .
“ We can divide the challenges of AI integration into two camps : general data concerns and specific concerns ,” says Colin Selfridge , Director Consulting Expert , Cyber Security , CGI UK & Australia . “ General data concerns typically include data misuse and data loss ( or theft ), while specific concerns surround issues with the accuracy of AI tools .”
Distributing data Public clouds are the most common type of cloud computing deployment , according to Microsoft . This is because they are a cheaper way to access the processing , flexibility and scalability offered by cloud computing .
This , therefore , makes them an attractive way to utilise the benefits of Gen AI , without the price tag of having to implement your own software and hardware . A 2023 study by software engineering company Binmile revealed 24 % of Small and Medium Business ( SMBs ) were fully committed to a single public cloud platform for their business operations .
Here lies the beginning of the problems : “ Adopting cloud technology and Gen AI has significantly reduced visibility into where data is stored , who can access it , and how it is protected ,” says Martin Borrett Technical Director ,

534,465

The average company has 534,465 files containing sensitive data , according to software company Varonis .
IBM Security . “ While Gen AI offers substantial value , it also introduces a new attack surface that requires protection .”
Utilising Gen AI models , organisations are distributing and decentralising their data across multiple systems and environments , which , especially if using a public AI model that learns from input data , make it harder to follow regulatory requirements and even keep it safe .
“ Injecting hostile content into large Language models used by AI is a welldocumented example . Indeed , with large language models , redundant code could present a possible future vulnerability , or result in a data leak ,” Colin explains .
Plus , by using these third-party services , it creates new potential openings for data breaches , which you do not have full control over , expanding a given organisation ’ s attack surface .
“ It ’ s worth considering hosting options , can it be installed on-premise or within company-controlled environments , rather than vendor multi-tenant hosting spaces ?,” says Colin . “ With the AI ‘ writing home ’ to ensure an efficient learning mechanism – does this represent an egress path for sensitive information ?”
96 August 2024