
Key Takeaways
- This year’s RSA Conference made it clear: The question isn’t whether you’re using agentic AI, but how responsibly you’re integrating it into your platforms.
- Hosting providers should expect rising demand for secure environments, powered by agentic AI, human oversight, or both.
- Trust and transparency continue to be competitive advantages. Clients want proof you’re leveraging secure, AI-ready infrastructure without any shortcuts.
One of the world’s largest cybersecurity events brought together hundreds of industry experts to explore the latest trends in cybersecurity.
Last year, concerns at the RSA Conference swirled around agentic AI. But the tone changed this year. Agentic AI was still a central topic, but this time, the conversation in San Francisco centered on how security leaders, cloud platforms, and hosting providers can integrate it safely and at scale.
We spoke with attendees of this year’s conference to hear their biggest takeaways.
From autonomous AI agents to growing browser-based threats, these expert insights have implications for providers of all types.
The AI Security Shift
The biggest takeaway from RSA 2025, held April 28 to May 1, was a clear shift in focus toward practical implementation and building trust with agentic AI.
Arti Raman, founder and CEO of Portal26, described the shift as going from “let’s learn all about this scary new threat surface” to “it’s time to evaluate and choose the right partners to implement a secure, responsible, and productive genAI program.”
In fact, the conversation may have evolved so much that simply advertising that you’re using AI is no longer enough of a consumer draw.
“While the hype still abounds, we believe more people are understanding that just putting ‘now with AI’ on a product or service doesn’t mean it’s better,” Raman said.
For hosting companies and platform providers, that means customers are no longer just asking if your service supports AI. They want to know how you’re doing it the right way.
Rick Caccia, CEO of WitnessAI, agrees with the sentiment: “CISOs want to enable, not prevent, AI adoption and are looking for solutions that help them do so.”
Web hosting providers have an opportunity to benefit here, mainly by offering value-added services such as managed integrations with trusted AI risk management platforms.
But as trust (and hallucinations) remain top concerns, Raman has a suggestion.
“More organizations are now talking about building their internal LLMs so they have more confidence in the models,” she said. “Everyone has similar tools — the difference maker is how your employees leverage them.”
“Agentic” Isn’t a Buzzword
The consensus is in: Agentic AI is powerful in many ways, but it still needs strong oversight.
Amir Kazemi of Cycode noted that the rise of AI-driven engineering has widened the gap between developers and security professionals.
“The rise of AI in engineering significantly accelerates development, shifting the AppSec-to-developer ratio from the traditional 1:100 to 1:1000. This places an unsustainable strain on security teams,” said Kazemi.
Put simply, AI helps developers move faster, but it also gives security teams far more to manage.
For hosting providers, this presents both opportunity and risk. Could they offer agentic AI “security teammates” as part of a premium managed service offering?
If so, those agents would require deep access to environments, including logs, system processes, and customer data, which raises the stakes for oversight.
Yoav Regev, CEO of Sentra, also shared his observations, noting that although agentic AI is becoming more popular, we’re still early in understanding the risks.
“The autonomy of agentic AI introduces new challenges — allowing systems to access, process, or share sensitive data without sufficient oversight,” Regev added.
Organizations already struggling with “shadow data” — information stored outside their main systems — may see those risks grow even more.
Shadow data often emerges from outdated legacy systems or during migrations to new tools and platforms, when data is created or stored without proper oversight or visibility.
“The conference theme, ‘Many Voices. One Community,’ is more than a slogan,” said Regev. “It’s a reminder that securing this next wave of AI will require collaboration across the security ecosystem. The innovation is real, but so is the complexity.”
RSA 2025 made clear that AI is not a magic bullet for all security challenges, said Joe Silva, CEO of Spektion.
“Most of the security experts I spoke to agreed that AI’s not actually the answer to our vulnerability management woes,” said Silva.
He added: “Agentic approaches are most effective and more often scale processes where capability isn’t a limiting factor, meaning there’s not a place for AI everywhere just yet.”
AI and Security
Jawahar Sivasankaran, president of Cyware, said many organizations are still struggling to act on threat intelligence quickly and efficiently.
And the unfortunate truth is that hosting providers often serve as the front line for security issues.
“The conversations at RSA this year have made one thing clear — the gap between intel and action is still one of the biggest challenges in cybersecurity,” said Sivasankaran.
Silva similarly noticed that security teams are under increasing pressure to work faster.
He specifically noted that they’re prioritizing threats quickly, collaborating across silos, and automating responses wherever they can.
But he emphasized that a major roadblock is the gap in visibility and data context between different parts of the security stack.
He likened it to judging a car’s safety by looking at it parked instead of giving it a test drive.
“The gap between these two sides of the security house isn’t about analyst skill or AI capabilities. It’s about visibility,” he said.
Silva emphasized: “Close that gap, and everything else follows.”
Sivasankaran added that staying ahead means defenders can’t operate in silos anymore.
“It takes shared intelligence, smarter coordination, and action-ready workflows to move at the speed of today’s threats.”
In other words, don’t cut corners. If you do, customers will eventually notice.
Browser-Based Attacks Are on the Rise
Outside of AI, one trend that has gained traction is the rise of the browser as a top attack vector.
With remote work, SaaS adoption, and BYOD environments becoming the norm, more data than ever flows through browser sessions, said Jay Martin, CISO at Blue Mantis.
“Browser-based threat vectors such as malware-less phishing, session hijacking, rogue add-ons and extensions, and more are rising,” said Martin.
For cloud hosts, this may affect the growing base of customers running browser-accessible tools, like webmail, CMS dashboards, or file management UIs.
“We’re shifting from the core issue of not having enough tools to how to integrate our security stack, prioritize threats, and scale effectively,” Martin said.
Martin’s advice?
“Security leaders must now focus on leveraging AI responsibly, securing browsers and digital workspaces, embedding security into development, and demanding measurable outcomes from security vendors and partners.”
Another answer may be to keep the human in the loop.
Paul Dyer from HackerOne said something we at HostingAdvice have long echoed:
We’re entering an era where AI won’t replace human ingenuity but instead enhance it.
“From AI-assisted reconnaissance to automated payload testing, security researchers already use hackbots to scale their impact, sharpen their focus, and uncover vulnerabilities faster than ever,” said Dyer.
For web hosts, could this point to a new service layer? Is the next step to offer environments that are optimized for hybrid workflows, combining automated software with human pentesters?
It should be, said Dyer: “Today’s most effective researchers aren’t choosing between AI and human skill; they combine both.”
“It takes adversarial creativity augmented by automation and rooted in real-world expertise to challenge systems before bad actors do.”
Final Thoughts
By many accounts, RSA 2025 felt like a turning point for the cybersecurity industry.
And for hosting providers, one thing is clear: They’re part of the security stack now. Clients expect safe, AI-ready environments without compromise, and partners expect responsible data handling without risk.
As Raman said, “At the end of the day, whatever you are doing has to solve a real problem, regardless of AI being part of how you do it.”