
Data Privacy Day: AI has put data privacy top of mind

But concerns grow about potential misuse of sensitive information
Data Privacy Day serves as an important reminder for businesses to give more value to privacy and data protection

As we enter 2024, the surge of AI innovation continues to transform industries, redefining the way businesses think and operate. Fueled by the data economy, the integration of AI into countless operational tasks has allowed businesses to make faster decisions, further automate processes, and predict behavior with remarkable efficiency.

Generative AI systems are being used, directly and indirectly, to support employees in note-taking, chatbot, and routine content generation tasks. Companies within the Middle East have made significant investments and commitments to AI research and development, positioning the region at the forefront of technological innovation.

Sensitive information concerns

However, as AI companies race to collect as much data as possible to pour into training their models, concerns are growing about the potential misuse of sensitive information and the erosion of privacy. For businesses, the rise of AI has raised inevitable questions around data privacy and made clear the need to safeguard sensitive information, prompting them to reassess and fortify their data protection strategies.

Regulators within the region are aware of these risks and are constantly seeking to strike a balance between fostering innovation and ensuring data privacy. Looking to ensure safety and protection while still allowing for innovative development, the UAE has introduced industry-specific guidelines that address sector nuances instead of relying on overarching regulations. A good example is the Abu Dhabi Department of Health’s Policy on the Use of AI in the Healthcare Sector.

As regulators rush to catch up with the pace of innovation, data privacy is firmly on the agenda for 2024. Businesses must start the year on the front foot to ensure they are not exposing sensitive or proprietary data to unnecessary risk via third-party AI apps and integrations. Companies that do not take these measures put themselves at risk of serious breaches and non-compliance.

Unfortunately, apps such as ChatGPT are often misused by employees, who knowingly or unknowingly upload personal data, highly sensitive proprietary source code, or even sensitive financial information into the platform. In fact, Netskope’s Threat Labs researchers discovered that source code is posted to ChatGPT more than any other type of sensitive data in the workplace, at a rate of 158 incidents per 10,000 users per month. So how do we manage this behaviour and minimise risk?

Data protection

Fortunately, many data protection issues caused by AI use can also be solved with the help of AI. Here are four ways to ensure data is protected by mobilising AI within a business’s security posture.

1. Sensitive data categorization

The first step to prioritising data privacy is being able to decipher what information requires protection. Data categorization is a key stage in this process, and it is an extremely time-consuming task if performed manually. Fortunately, AI and machine learning can be harnessed for the automated categorization of data across department systems. For example, AI and ML can scan images to identify personal and financial data, security passes, or even contracts that include sensitive terms, which can then be categorized appropriately for data protection policy handling. Once data is categorized, it can be managed and protected by security controls with greater efficiency.
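
To make the idea concrete, here is a minimal sketch of automated categorization, assuming a simple rule-based approach. The category names and patterns are illustrative placeholders; a production system would rely on trained ML classifiers (and OCR for scanned images) rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only: a real categorization engine would combine
# trained classifiers, exact-match dictionaries, and OCR for images.
PATTERNS = {
    "personal_data": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),         # email addresses
    "financial_data": re.compile(r"\b(?:\d[ -]?){13,16}\b"),              # card-like numbers
    "credentials": re.compile(r"(?i)\b(password|api[_-]?key|secret)\b"),  # secret keywords
}

def categorize(text: str) -> set[str]:
    """Return the sensitivity categories detected in a piece of text."""
    return {label for label, pattern in PATTERNS.items() if pattern.search(text)}

if __name__ == "__main__":
    sample = "Invoice for jane.doe@example.com, card 4111 1111 1111 1111"
    print(categorize(sample))  # detects personal_data and financial_data
```

Once a label is attached, downstream controls such as retention, encryption, or DLP policies can key off the category rather than inspecting the raw content each time.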


2. Employee training

By advocating for greater AI awareness training in the workplace, security leaders can help defend their organizations against data loss and protect critical information. However, if employees are subject to once-yearly training sessions on the subject, as is unfortunately often the case, knowledge is unlikely to be retained for use at the crucial moment. For the best results, leaders should look to install real-time coaching technology (again, powered by AI) that reminds employees of company policy, for example the risks associated with uploading sensitive data to AI apps, at the moment they are about to do so. If necessary, security teams can intervene further, directing employees to alternative approved applications or blocking access entirely.
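
As a rough illustration of how such coaching could be wired up, the sketch below assumes a hypothetical policy check that runs when a user sends data to an AI app. The app names, verdicts, and reminder text are invented for the example and do not reflect any particular product.

```python
# Hypothetical, simplified coaching logic: app names, verdicts, and the
# reminder text are placeholders, not any vendor's actual policy engine.
APPROVED_AI_APPS = {"approved-enterprise-assistant"}

def coach_upload(app: str, detected_categories: set[str]) -> tuple[str, str]:
    """Return an action and the coaching message shown to the employee."""
    if not detected_categories:
        return "allow", ""
    if app in APPROVED_AI_APPS:
        return "allow", "Reminder: this data is sensitive; handle it per company policy."
    detail = ", ".join(sorted(detected_categories))
    return "block", f"Uploading {detail} to {app} is not permitted; please use an approved app."

action, message = coach_upload("chatgpt", {"source_code"})
print(action, "-", message)  # block - Uploading source_code to chatgpt is not permitted...
```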

3. Data loss prevention

Data loss prevention tools can enlist AI to detect files, posts, or emails containing potentially sensitive information, such as source code, passwords, or intellectual property, and alert and block in real time when these assets are leaving the organisation. Amid the current generative AI hype, this is a highly useful way of ensuring that posts containing sensitive information are not uploaded to third-party platforms that lack prior approval. It works best in conjunction with real-time employee coaching, ensuring that employees are alerted to potential data misuse as it happens, i.e. as soon as sensitive information is detected.
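
A minimal sketch of that real-time inspection step might look like the following. The detectors are deliberately naive stand-ins (two regular expressions) for the ML-based detection a real DLP engine would use, and the function and pattern names are hypothetical.

```python
import re

# Naive stand-in detectors; real DLP engines use ML classifiers, document
# fingerprinting, and exact-match dictionaries rather than two regexes.
SOURCE_CODE = re.compile(r"(def |class |import |#include|function\s*\()")
SECRETS = re.compile(r"(?i)\b(password|private[_-]?key|token)\s*[:=]")

def inspect_outbound(post: str) -> dict:
    """Inspect an outbound post and return an action plus the reasons for it."""
    reasons = []
    if SOURCE_CODE.search(post):
        reasons.append("source code")
    if SECRETS.search(post):
        reasons.append("credentials")
    return {"action": "block" if reasons else "allow", "reasons": reasons}

print(inspect_outbound("def train():\n    password = 'hunter2'"))
# {'action': 'block', 'reasons': ['source code', 'credentials']}
```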

4. Threat detection

Another essential way to protect data is to ensure AI is enlisted to monitor and detect threats such as malware and ransomware whilst also reducing the attack surface. For large enterprises, AI-powered device intelligence along with borderless SD-WAN can proactively monitor the network and provide predictive insights for teams, preventing network issues before they happen. AI can also help detect and flag unusual behaviour as part of a zero trust approach: using it, network and security teams can automatically detect access from an unusual device or location.
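
The behavioural piece can be illustrated with a toy baseline model: the sketch below simply flags any device and location pair not previously seen for a user. The class and field names are hypothetical, and a real AI-driven system would score many more signals (time of day, impossible travel, device posture) rather than applying a binary lookup.

```python
from collections import defaultdict

# Toy baseline: in practice, device intelligence scores many signals,
# not just an exact match on a (device, location) pair.
class AccessMonitor:
    def __init__(self) -> None:
        self.seen: dict[str, set[tuple[str, str]]] = defaultdict(set)

    def record(self, user: str, device: str, location: str) -> None:
        """Add an observed access to the user's baseline."""
        self.seen[user].add((device, location))

    def is_unusual(self, user: str, device: str, location: str) -> bool:
        """Flag access that does not match anything previously seen for this user."""
        return (device, location) not in self.seen[user]

monitor = AccessMonitor()
monitor.record("amal", "laptop-123", "Dubai")
print(monitor.is_unusual("amal", "unknown-phone", "Sydney"))  # True -> flag for review
```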

To conclude, businesses are evolving fast thanks to the introduction of AI into their organizations, shifting operations to become more streamlined and efficient. However, privacy and data protection remain critically important, ensuring organizations can continue to safely utilize these technological advancements whilst remaining compliant with existing and newly proposed regulations. Data privacy is more than just a single day…it’s every day, regardless of whether you are human or AI-powered.

Neil Thacker is chief information security officer, EMEA at Netskope

About Neil Thacker

Neil Thacker is a veteran information security professional and a data protection and privacy expert with over 25 years of experience in the information security industry. He has been recognized by his peers as a leader in the industry, including being selected for the CSO30 in 2022, shortlisted for an unsung hero award (CISO Supremo category), and named MVP in consecutive years (2021 and 2022) by his Netskope peers.

Neil is an advisory board member of the Cloud Security Alliance (CSA) and a former advisor to ENISA (the EU Agency for Cybersecurity). He is also a co-founder and board member of the Security Advisor Alliance (SAA), a non-profit organization focused on promoting the industry to the next generation and ensuring that students, teachers, and schools have the resources and mentorship necessary to foster the cybersecurity professionals of the future.


Disclaimer: Opinions conveyed in this article are solely those of the author. The information presented in this article is intended for informational purposes only. It does not constitute advice on tax or legal matters, nor financial or investment recommendations. Refer to our full disclaimer policy here.