
- Simple security oversights, like poor security hygiene and improper training, are leaving companies vulnerable to AI-driven cyberthreats
- With growing concerns over data privacy and tighter regulations, companies must prioritize transparency and proactive data protection strategies
- Experts, including Joe Silva from Spektion, identify trends and share advice they’d give their own clients, from using PETs and zero-trust architectures to monitoring the software supply chain
A new report by ISACA shows that while AI is increasingly used for privacy tasks, resource shortages and compliance challenges are making privacy roles more stressful.
Another report by ExtraHop points to simple security missteps by employees, such as bad security hygiene and improper training, as the main gateways for attackers to access credentials.
Rob Truesdell of Pangea, a cybersecurity platform for AI applications, echoes this concern, noting that one of the most common vulnerabilities stems from basic mistakes with AI.

“In 2025, we’re seeing a concerning trend where sensitive data exposure through AI isn’t primarily coming from sophisticated attacks — it’s happening through basic oversights in authorization and data access controls,” said Truesdell.
A prime example, Truesdell noted, is that instead of hackers breaking into networks, companies are unintentionally exposing confidential information because they haven’t established clear access rules.
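As a minimal sketch of the kind of access rule Truesdell is describing, the snippet below (all names, the document store, and the group model are hypothetical, not drawn from any product) checks a caller's entitlements before any document is handed to an AI assistant, so the model can only see what the requesting user could already see.

```python
# Hypothetical sketch: enforce existing access rules *before* data reaches an AI model.
# The User class, DOCUMENT_STORE, and group names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class User:
    name: str
    groups: set[str]


# Each document carries an allow-list of groups, mirroring the source system's ACLs.
DOCUMENT_STORE = [
    {"id": "doc-1", "text": "Public pricing sheet", "allowed_groups": {"everyone"}},
    {"id": "doc-2", "text": "M&A negotiation notes", "allowed_groups": {"legal", "exec"}},
]


def authorized_documents(user: User) -> list[dict]:
    """Return only the documents the requesting user is already entitled to see."""
    return [d for d in DOCUMENT_STORE if user.groups & d["allowed_groups"]]


def build_prompt(user: User, question: str) -> str:
    """Assemble model context exclusively from pre-authorized documents."""
    context = "\n".join(d["text"] for d in authorized_documents(user))
    return f"Context:\n{context}\n\nQuestion: {question}"


if __name__ == "__main__":
    analyst = User("analyst", {"everyone"})
    # The confidential negotiation notes never enter the prompt for this user.
    print(build_prompt(analyst, "What is our list price?"))
```

The point of the sketch is simply that authorization happens upstream of the model, so a missing access rule, rather than a sophisticated attack, is what would leak data.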
While organizations use AI to build strong defenses, such as firewalls, threat detection systems, and encryption, cybercriminals are using the same technology to break through those barriers. Bad actors leverage AI to automate phishing attacks and identify security gaps, and their models are trained to evade detection by other AI-powered security tools.
Paul Bischoff of Comparitech also mentioned AI-driven data scraping, which is making personal data easier than ever to exploit. Data scraping is the process of extracting information from websites.
“Data privacy used to be about protecting your private information from hackers, criminals, and data brokers. Now we can add AI to that list,” said Bischoff.
These findings come at a time when trust in data privacy is at an all-time low. According to Forbes, 86% of Americans report growing concerns about data privacy, and 40% don’t trust companies to use their data ethically.
Reframing AI as the Solution, Not the Problem
From everyday applications to building infrastructure, AI/ML is reshaping society, and cybersecurity along with it. In fact, the market for AI in cybersecurity is expected to grow from $24 billion in 2023 to $134 billion by 2030.
As deepfakes, voice cloning, and manipulated images become more prevalent, facial recognition software can help prevent the accidental sharing of sensitive information, or even a $25 million loss like the one a multinational company suffered after being duped by a deepfake of its CFO.

Regaining Consumer Trust
Consumer trust in organizations is at an all-time low. Even so, Akhil Mittal, senior security consulting manager at Black Duck, said companies that make data privacy a priority can turn the widespread skepticism into a competitive advantage.
“High-profile breaches and stricter regulations like GDPR, CCPA, and emerging AI-related privacy laws are pushing companies to make data privacy a fundamental part of their operations,” said Mittal.
Chris Linnell, associate director of data privacy at Bridewell, emphasized that compliance isn’t just about avoiding fines; it’s about maintaining consumer trust.
“Often, we hear regulatory fines discussed as the main reason to achieve compliance, but what we’re seeing is that losing trust from consumers is one of the biggest impacts of poor data privacy practice, and subsequently one of the biggest drivers for our clients who can demonstrate proactive compliance,” said Linnell.
For some, this comes at a perfect time: Governments are tightening data privacy regulations, pushing businesses to adopt privacy-enhancing technologies (PETs), greater transparency, and more ethical data practices.

Another way to rebuild trust in AI and data privacy is through open-source software.
While widely used across the tech industry, open-source software has its skeptics due to its publicly available nature. But Dr. Andrew Bolster, senior R&D manager at Black Duck, believes its transparency could be a strength.
“Open-source AI, with its transparency and collective development, often outpaces closed-source alternatives in terms of adaptability and trust,” said Bolster.
Boris Cipot, also of Black Duck, noted that focusing on open-source security is a major trend Black Duck has been seeing, as it helps “secure the usage of OSS dependencies and comply with their licensing obligations.”
As more organizations recognize these advantages, Bolster predicts a major shift toward open-source AI.
That optimism should be kept on a tight leash, at least for now, according to Joe Silva, CEO of the cybersecurity company Spektion.
Silva warned that the lack of transparency in the broader software supply chain remains a serious concern, even within open-source ecosystems: “We’re rushing forward without robust guardrails in place. That rush is going to create new vulnerabilities, especially given how complex these systems are,” he told us. “These attacks can introduce attack vectors that aren’t as common in commercial software, simply because malicious contributors have easier access.”
Despite the potential of open-source AI, Silva remains cautious. He added: “It’s fascinating to see where the industry is headed, but also worrying when it comes to keeping the door shut on new threats.”
Practical Strategies to Implement
“If you don’t need to store certain data, don’t collect it in the first place.”
That’s what Shrav Mehta, founder and CEO of Secureframe, said. He makes a great point: the more information you collect, the more at risk it becomes.
While protecting sensitive data is challenging enough, Mehta’s approach is straightforward — minimize the data you store to reduce the chances of exposure. It’s a simple yet powerful strategy for keeping your data under control.
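As a hedged illustration of Mehta's data-minimization principle, the sketch below uses an invented schema and field names: the application keeps an explicit allow-list of the fields it truly needs and drops everything else before a record is ever stored.

```python
# Illustrative only: keep an explicit allow-list of fields the application truly needs,
# and discard everything else before the record is persisted.
import hashlib

ALLOWED_FIELDS = {"order_id", "item_sku", "quantity"}  # hypothetical schema


def minimize(record: dict) -> dict:
    """Retain only allow-listed fields; pseudonymize the customer identifier."""
    slim = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    if "email" in record:
        # Store a one-way hash so the raw address never lands in the database.
        slim["customer_ref"] = hashlib.sha256(record["email"].encode()).hexdigest()[:16]
    return slim


raw = {"order_id": 42, "item_sku": "A-7", "quantity": 1,
       "email": "jane@example.com", "home_address": "1 Main St", "dob": "1990-01-01"}
print(minimize(raw))  # the home address and date of birth are never collected into storage
```

Whatever never makes it into storage can never be exposed in a breach, which is the whole force of Mehta's point.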
Synology, a leader in data storage and backup, is a prime example of a company developing solutions to protect organizations from modern threats. Will Davis, Synology’s VP of Enterprise Sales and Marketing, describes a comprehensive protection plan as “building a fortress,” emphasizing that backups serve as the last line of defense.
“Many businesses only focus on building a wall around their organization to keep intruders out – this is necessary, but what happens when someone inevitably breaks through?” Davis said to us. “Instead of a wall, focus on building a fortress with multiple layers of safeguards, from threat detection to backup and disaster recovery.”
Carlos Aguilar Melchor, chief scientist for cybersecurity at SandboxAQ, also emphasized zero trust architecture (ZTA) and post-quantum cryptography (PQC) as critical components of privacy strategies:
- Zero trust architecture is built on the principle of “never trust, always verify”
- Post-quantum cryptography is specifically designed to address the encryption risks posed by quantum computing
“The ongoing transition to Post-Quantum Cryptography is crucial to future-proofing encryption against the potential risks posed by quantum computing, ensuring privacy and security in the digital age,” said Aguilar Melchor.
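Aguilar Melchor's first pillar, zero trust, can be sketched in a few lines. In the snippet below (the policies, groups, and posture checks are all hypothetical), identity, device posture, and authorization are re-verified on every request, regardless of where the request originates.

```python
# Hypothetical zero-trust request handler: no request is trusted based on network
# location alone; identity, device posture, and authorization are checked every time.
from dataclasses import dataclass


@dataclass
class Request:
    user: str
    token_valid: bool       # e.g. a freshly validated short-lived credential
    device_compliant: bool  # e.g. disk encryption on, OS patched
    resource: str


POLICY = {"payroll-db": {"finance"}, "build-server": {"engineering"}}   # assumed policy
USER_GROUPS = {"alice": {"finance"}, "bob": {"engineering"}}            # assumed directory


def authorize(req: Request) -> bool:
    """Apply 'never trust, always verify': every factor must pass on every request."""
    if not req.token_valid or not req.device_compliant:
        return False
    allowed = POLICY.get(req.resource, set())
    return bool(USER_GROUPS.get(req.user, set()) & allowed)


print(authorize(Request("alice", True, True, "payroll-db")))   # True
print(authorize(Request("alice", True, False, "payroll-db")))  # False: device fails posture check
```

The design choice worth noting is that there is no "inside the perimeter" shortcut: a valid user on a non-compliant device is refused just as an outsider would be.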
Nick Mistry, CEO of Lineaje, advocates for continuous monitoring of the software supply chain to detect threats proactively, backed by up-to-date Software Bills of Materials (SBOMs).
“Maintaining a comprehensive and up-to-date Software Bill of Materials (SBOM) is critical,” said Mistry. “A detailed SBOM provides full visibility into all components within the software, empowering organizations to verify software integrity and respond quickly in the event of a vulnerability or breach.”
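As a minimal sketch of how an up-to-date SBOM enables that kind of quick response, the snippet below walks a CycloneDX-style component list and flags anything matching a known-vulnerable version. The advisory list and file name here are simplified assumptions; real SBOM formats and vulnerability feeds carry far more detail.

```python
# Simplified illustration: cross-reference an SBOM's component list against known-bad
# versions. Real SBOMs (CycloneDX, SPDX) and advisory feeds are much richer than this.
import json

KNOWN_VULNERABLE = {("log4j-core", "2.14.1"), ("openssl", "1.1.1k")}  # assumed examples


def flag_components(sbom_path: str) -> list[str]:
    """Return 'name version' strings for SBOM components that match a known advisory."""
    with open(sbom_path) as fh:
        sbom = json.load(fh)
    hits = []
    for comp in sbom.get("components", []):  # CycloneDX JSON lists components in this array
        if (comp.get("name"), comp.get("version")) in KNOWN_VULNERABLE:
            hits.append(f'{comp["name"]} {comp["version"]}')
    return hits


# Example: flag_components("sbom.cyclonedx.json") might return ["log4j-core 2.14.1"]
```

With a current SBOM on hand, answering "are we exposed?" becomes a lookup rather than an investigation, which is exactly the visibility Mistry is describing.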
Silva has a few suggestions.
“Deploy the security tools you have, focusing on understanding from an endpoint network and event monitoring and response technology, whether it’s a SIM or a more modern solution like XDR,” Silva said, referring to two categories of cybersecurity solutions: security information management (SIM) and extended detection and response (XDR).
But don’t go overboard with the tools either. The more you have to manage, the less visibility you’ll have over what’s actually happening.
“Focus on getting more visibility into what software is doing and what it’s interacting with,” Silva said. “I’ve seen too many organizations focus on buying all the tools before they actually even understand the risks they’re managing because they don’t have the right visibility.”
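As a loose illustration of the visibility Silva describes, the sketch below counts failed logins per host from endpoint event records and flags hosts that cross a simple threshold. The log format, field names, and threshold are invented; a real SIEM or XDR platform automates this kind of correlation across many data sources at scale.

```python
# Invented example of basic event correlation: count failed logins per host and flag
# outliers. The telemetry format and threshold below are assumptions for illustration.
from collections import Counter

EVENTS = [  # hypothetical endpoint telemetry
    {"host": "wks-01", "event": "login_failed"},
    {"host": "wks-01", "event": "login_failed"},
    {"host": "wks-01", "event": "login_failed"},
    {"host": "wks-02", "event": "login_ok"},
]

THRESHOLD = 3  # arbitrary demonstration value

failures = Counter(e["host"] for e in EVENTS if e["event"] == "login_failed")
for host, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {host} had {count} failed logins")  # wks-01 trips the rule
```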
These findings show that while AI is becoming a core part of security operations, experts emphasize that we also have to adjust the way we use the new tools that come with it: for every door we open, there is a new vulnerability to consider.