
- New AI safety report urges action from policymakers to manage risks of rapid AI evolution
- Experts, including Andy Syrewicze, Dane Sherrets, Kiran Chinnagangannagari, and Nick Mistry, discuss the growing risks and need for oversight
Most know AI is evolving fast, but a new report from more than 100 experts warns it’s moving too fast for policymakers to keep up. The “International AI Safety Report” is an independent assessment that highlights the risks and impact of advanced AI while offering guidance on how (and why) governments should respond.
Its findings aren’t surprising: Since ChatGPT’s breakout, AI has been improving rapidly in scientific reasoning, programming, and long-duration tasks. There’s also a clear shift toward autonomous AI agents that need less and less human oversight.
But these experts are divided on the pace. Some see a gradual climb, while others warn of a sudden surge in capability that could outstrip typical safety measures.
Regardless, experts do agree that AI’s rapid advancement brings serious economic, geopolitical, and societal risks. Whether the cause is malicious use, accidents, or training flaws (e.g., hallucinations and biases), they worry about whether AI can keep accelerating safely and securely.
AI’s Growing Pains and Risks We Can’t Ignore
These risks aren’t just hypothetical. Many are playing out in real time.
Andy Syrewicze, security evangelist at Hornetsecurity, told us he’s seen this all too often: While generative AI helps businesses, it’s also making cybercriminals smarter.
“While AI can help with business efficiencies, it can also be used maliciously,” said Syrewicze. “This is most notably seen in the uptick of AI-generated sophisticated phishing attacks that can bypass traditional security measures.”
The report highlights four major malicious uses of AI: cyberattacks, fake content creation, manipulation of public opinion, and weaponization.
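To see why fluent, machine-written lures slip past older defenses, it helps to caricature what a “traditional security measure” often is: a static rule list keyed to the clumsy tells of human-written spam. The sketch below is purely illustrative; the rule list and both sample messages are hypothetical, not drawn from any real filter.

```python
# A caricature of a traditional rule-based phishing filter. The RED_FLAGS
# list and both sample messages are hypothetical, for illustration only.
RED_FLAGS = ["urgent!!", "verify you account", "click here imediately"]

def looks_like_phishing(message: str) -> bool:
    """Flag a message if it contains any known clumsy phishing tell."""
    lowered = message.lower()
    return any(flag in lowered for flag in RED_FLAGS)

# A classic sloppy phish trips the static rules...
print(looks_like_phishing("URGENT!! Verify you account now"))  # True

# ...but a fluent, AI-polished rewrite of the same lure sails through.
print(looks_like_phishing(
    "Hi Sam, finance flagged a mismatch on invoice 4821. "
    "Could you confirm your payment details before 5 p.m. today?"
))  # False
```

An AI-polished lure has none of the misspellings or boilerplate urgency the rules look for, which is exactly the gap Syrewicze describes.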
“With these advancements, cybersecurity risks are a major concern with AI systems being used for malicious purposes, such as generating harmful content or enabling cyberattacks,” said Kiran Chinnagangannagari, chief product and technology officer at Securin.

Rohit Dhamankar, the vice president of product strategy at Fortra, predicts that deepfakes are becoming so convincing that biometric authentication methods used in multifactor security (e.g., voice authentication or facial recognition) may become useless.
“Adversaries are already using it to create better tools, automation, and malware, even poisoning AI systems by targeting training data or underlying model parameters,” said Dhamankar.
Security measures must evolve just as quickly. But maybe there’s a light at the end of the tunnel.
Dane Sherrets, a staff solutions architect at HackerOne, told us that security researchers have successfully used good AI to combat bad AI. He’s seen it firsthand at the company, which connects organizations with ethical hackers to identify and fix security vulnerabilities.
These tools, called hackbots, are AI agents that automatically probe sites or systems for vulnerabilities. Attackers can use them to break in, but cybersecurity experts turn the same technology against them.
“Some have even helped uncover zero-day threats, stopping potential exploits before they become full-blown attacks,” explained Sherrets.

Sherrets said the operators of the hackbot XBOW quickly became top bounty earners on HackerOne by reporting critical vulnerabilities.
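For a sense of what a hackbot automates at its simplest, here is a minimal sketch of the probe-and-report loop: request a handful of commonly exposed paths and record anything publicly readable. This is a toy illustration under our own assumptions, not HackerOne’s or XBOW’s actual tooling; the target URL and the path list are hypothetical.

```python
# Minimal sketch of the probe-and-report loop at the heart of a hackbot.
# Illustrative only: the target URL and SUSPECT_PATHS are hypothetical,
# and this is not HackerOne's or XBOW's actual tooling.
import requests

SUSPECT_PATHS = ["/.git/config", "/.env", "/server-status"]

def scan(base_url: str) -> list[str]:
    """Probe a target for commonly exposed files; report anything readable."""
    findings = []
    for path in SUSPECT_PATHS:
        try:
            resp = requests.get(base_url + path, timeout=5)
        except requests.RequestException:
            continue  # host unreachable or request failed; move on
        if resp.status_code == 200 and resp.text.strip():
            findings.append(f"{path} is publicly readable ({len(resp.text)} bytes)")
    return findings

if __name__ == "__main__":
    # Only scan systems you are explicitly authorized to test.
    for finding in scan("https://example.com"):
        print(finding)
```

Real hackbots layer an AI model over loops like this to choose targets, interpret responses, and chain findings together, and they should only ever be pointed at systems their operators are authorized to test.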
Beyond cybersecurity, AI’s impact is felt on a global scale.
As large companies in high-income countries (HICs) with strong digital infrastructure, such as the U.S., the UK, Germany, France, and Japan, lead AI development, that concentration could worsen global inequality.
For example, the report highlights that the U.S. developed 56% of the major general-purpose AI models in 2023, widening the gap between rich and poor nations.
If wealthier countries lead the way in AI development and infrastructure, poorer countries may have no choice but to rely on expensive, foreign-made technologies. In many ways, this could start to resemble a new form of colonialism.
But researchers suggest if developing economies adopt AI effectively, it could boost productivity for skilled workers and create remote work opportunities.
Open-Source AI Is Innovative (with a Side of Risk)
Attempts to close the AI research and development (R&D) gap haven’t been successful so far.
That’s not to say there haven’t been efforts to democratize access to this kind of technology; rather, these initiatives need substantial financial investment and a lot of time to take effect. Some organizations, including the U.S. government, are pushing forward to address those issues.
The Defense Advanced Research Projects Agency’s AI Cyber Challenge (AIxCC) is a two-year competition aimed at getting U.S. cybersecurity pros to build AI tools that can protect critical infrastructure and open-source software (OSS). The prize pool is $18.5 million, and winners will be announced in August 2025.
AI has already proven it can find and exploit security flaws faster than human teams can. Google’s OSS-Fuzz, for example, used AI to discover 26 vulnerabilities in open-source projects, including a major flaw in OpenSSL that had been missed for 20 years.
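OSS-Fuzz’s core technique is fuzzing: hammering a parser with randomized inputs until something crashes. The sketch below shows that idea at its smallest, with a deliberately buggy toy parser standing in for real code. It is not Google’s pipeline; the AI-assisted part of OSS-Fuzz (using models to help write fuzz harnesses like this one) is only described here, not implemented.

```python
# Minimal sketch of fuzzing: feed randomized inputs to a parser and flag
# crashes. parse_record is a deliberately buggy toy, not OpenSSL code.
import random

def parse_record(data: bytes) -> int:
    # Hidden bug: trusts the first byte as a length and indexes with it,
    # without ever checking it against the actual input size.
    declared_len = data[0] if data else 0
    return data[declared_len]  # IndexError when declared_len >= len(data)

def fuzz(trials: int = 10_000) -> None:
    for i in range(trials):
        blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 16)))
        try:
            parse_record(blob)
        except IndexError:
            print(f"trial {i}: crash on input {blob.hex()} (length byte {blob[0]})")
            return
    print("no crash found")

if __name__ == "__main__":
    fuzz()
```

At production scale the same loop runs with coverage guidance and thousands of CPU-hours, which is how long-dormant bugs like the OpenSSL flaw eventually surface.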
Nick Mistry, CISO and SVP at Lineaje, a supply chain cybersecurity company, warns that as AI-generated code becomes more common in OSS projects, transparency and security should be carefully managed.

Take Meta’s LLaMA model, for example. Its public availability allows developers worldwide to build on the technology. At the same time, this transparency also brings security risks: When powerful AI systems are accessible to anyone, misuse rises — from creating fake news to developing automated hacking tools or launching cyberattacks.
It’s dangerously close to becoming a cyber Wild West.
But as Mistry points out, “The trade-off between transparency and security must be carefully managed, but the benefits — faster innovation, and AI-driven enhancements — often outweigh the risks.”
Slawomir Ligier, the vice president of product management at Protegrity, also mentioned the broader impact of open-source contributions, specifically citing Tinder’s recent release of an open-source tool on GitHub as an example.
“Tinder is one of many companies contributing their code to the open-source community. Their latest contributions are focused on the development of scalable mobile applications. Contributions to the open-source [community] can benefit all participants,” Ligier explained. “Companies can tap into a global pool of talent and ideas, leading to innovative solutions and improvements that might not have been developed internally.”
This holds true in AI as well. Open-source AI models have evolved at a pace comparable to, and in some cases faster than, their closed-source counterparts. The transparency of open-source AI lets the broader community identify flaws that might otherwise go unnoticed, said Ligier.
Final Thoughts on Balancing Progress and Precaution
None of this is intended to scare companies or the general public away from AI. As Syrewicze said, “[generative AI] technology can also be used as a force for good: powering best-in-class threat detection and response, filtering AI-generated emails and speeding up incident responses.”
In fact, Rick Caccia, CEO of WitnessAI, says that fear may be an overly cautious misconception; what really matters is ensuring AI is used appropriately.
“If a company knows that their employees can use AI safely, they will be more willing to adopt the technology. If decision-makers instead choose to lock down and prevent any AI usage out of security concerns, their competitors who do choose to leverage AI will have a competitive advantage,” said Caccia.

Sherrets is inclined to agree.
“AI isn’t replacing human expertise yet. AI still lacks the creativity and context that security teams and researchers need,” he said. “How we approach adopting and operating this powerful technology will determine if it becomes an opportunity or risk.”
AI is already making significant strides in transforming industries and boosting productivity. And looking ahead, it’s likely to help revolutionize even more sectors.
“With AI agents’ revolution, the potential for automation and smart systems is on the rise,” said Chinnagangannagari.
We can look at the future of AI with optimism without forgetting that its rapid evolution demands oversight. So while we may put AI in the driver’s seat, it’s clear we still need someone in the passenger seat with a map.