Designed for the Needs of Machine Intelligence: Graphcore’s New Processor Now Powers Some of the World’s First IPU Servers

Graphcore Revolutionizes Machine Intelligence Compute

TL;DR: The UK-based semiconductor company Graphcore has developed a new type of processor purpose-built to speed up artificial intelligence and machine learning applications. The processor, now powering early IPU servers worldwide, makes it possible to create next-gen machine intelligence innovations in a highly scalable manner. Transparency is an ongoing focus for the company, which publishes the results of its own benchmarking activities across a wide range of training and inference models.

In August 2021, Google Brain Research Scholar Sara Hooker published “The Hardware Lottery,” an essay on how existing hardware and software innovations largely dictate the direction of research ideas.

The author posits that research ideas are often deemed successful not because they are superior to others, but because they are compatible with currently available software and hardware. In that way, delays in hardware and software advancement can, in turn, delay research progress.

Chris Tunsley, Director of Product Marketing at Graphcore, said his company aims to help innovators successfully push such boundaries by expanding the processor ecosystem. The company’s new Intelligence Processing Unit (IPU) is purpose-built to speed up artificial intelligence (AI) and machine learning (ML) applications.

Graphcore’s innovative IPU chip is currently powering IPU servers worldwide.

“‘The Hardware Lottery’ gave us the idea that if you only have a GPU, areas of innovation are going to be limited to GPU-friendly technology,” Chris said. “But there are very fruitful areas of innovation that aren’t necessarily suited to a GPU or CPU.”

Technology like Graphcore’s IPU ultimately accelerates industry progress by unlocking innovation in AI and ML. Chris said he’s seen it firsthand in financial services, where professionals use Markov Chain Monte Carlo (MCMC) algorithms when estimating continuous-time asset pricing models.

“We’ve seen customers who wanted to use MCMC for alpha estimations in stock predictions but found that GPUs aren’t great for use with the model because they take too long,” he said. “Suddenly, with an IPU, they can complete tasks they couldn’t otherwise.”
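
To make the MCMC reference concrete, the sketch below shows a minimal Metropolis-Hastings sampler in plain NumPy applied to a toy alpha-estimation problem. The data, prior, and step size are invented for illustration and are not the customer model Chris describes; the point is that each chain is a long run of small, sequential, branch-heavy updates, the kind of fine-grained work that maps awkwardly onto wide, lockstep GPU execution.

```python
# Minimal Metropolis-Hastings sketch (plain NumPy) of the kind of MCMC
# workload described above. Everything here is a toy placeholder, not the
# customer's actual alpha-estimation model.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: daily excess returns modeled as alpha plus Gaussian noise.
true_alpha = 0.03
returns = true_alpha + 0.2 * rng.standard_normal(250)

def log_posterior(alpha, data, sigma=0.2, prior_sd=1.0):
    """Gaussian likelihood for the returns plus a Gaussian prior on alpha."""
    log_lik = -0.5 * np.sum((data - alpha) ** 2) / sigma**2
    log_prior = -0.5 * alpha**2 / prior_sd**2
    return log_lik + log_prior

def metropolis_hastings(data, n_samples=20_000, step=0.02):
    samples = np.empty(n_samples)
    alpha = 0.0
    current_lp = log_posterior(alpha, data)
    for i in range(n_samples):
        proposal = alpha + step * rng.standard_normal()
        proposal_lp = log_posterior(proposal, data)
        # Accept with probability min(1, exp(lp_new - lp_old)).
        if np.log(rng.uniform()) < proposal_lp - current_lp:
            alpha, current_lp = proposal, proposal_lp
        samples[i] = alpha
    return samples

samples = metropolis_hastings(returns)
print(f"Posterior mean alpha: {samples[5000:].mean():.4f}")  # discard burn-in
```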

Addressing Gaps in the AI Industry Since 2012

Graphcore has built a reputation for advancing artificial intelligence since Founders Nigel Toon and Simon Knowles met at a pub to brainstorm an idea for an AI chip nearly a decade ago in Bath, England. The city, located in the county of Somerset, was named for its historic Roman-built baths. Amid this setting, Graphcore’s founders were on a mission that must have seemed exceedingly future-forward.

“The company was founded to provide an industry solution — a chip — designed from the ground up to address the unsupported ML and AI workloads,” Chris said. “The CPU and the GPU are super capable, but they are legacy processors that were designed for a different purpose. We came up with a processor architecture that’s fundamentally different.”

It’s all packaged in a system known as the IPU Machine (IPU-M2000), a machine intelligence compute engine built on the IPU. The system connects easily with today’s datacenters, allowing for the supercharged, scalable communication the AI industry demands.

“The fundamental building block for delivering the processor is what we call the IPU Machine, which plugs into a datacenter through standard Ethernet capability,” Chris said. “It’s not a proprietary plugin nightmare.”

The IPU Machine, a 1U compute platform for AI infrastructure, is scalable up to a 64K-IPU configuration. Rather than ending up as a doorstop, the platform serves as the building block for an evolving, easily reconfigured infrastructure.

“It’s an easy-to-engage platform where everything is taken care of,” Chris said. “It’s got the server, the compute, the capability. Four IPU-M2000s can easily be reconfigured into what’s called a POD64, which becomes a building block for scaling out.”
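
As a rough sketch of what that plug-in experience can look like from the software side, the snippet below uses Graphcore’s PopTorch frontend (part of the Poplar SDK) to replicate a small PyTorch model across the four IPUs in a single IPU-M2000. The model, data, and option values are placeholders rather than anything described in this article, and the option names follow the PopTorch API as we understand it, so they are worth checking against the current SDK documentation.

```python
import torch
import poptorch

# Synthetic stand-in data; a real workload would bring its own dataset.
features = torch.randn(8192, 128)
labels = torch.randint(0, 10, (8192,))
dataset = torch.utils.data.TensorDataset(features, labels)

class ClassifierWithLoss(torch.nn.Module):
    """PopTorch expects the loss to be computed inside the model's forward."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
        self.loss_fn = torch.nn.CrossEntropyLoss()

    def forward(self, x, targets=None):
        out = self.net(x)
        if targets is None:
            return out
        return out, self.loss_fn(out, targets)

opts = poptorch.Options()
opts.replicationFactor(4)   # replicate the model across 4 IPUs (one IPU-M2000)
opts.deviceIterations(16)   # run 16 batches per host<->IPU interaction

model = ClassifierWithLoss()
optimizer = poptorch.optim.SGD(model.parameters(), lr=0.01)
training_model = poptorch.trainingModel(model, options=opts, optimizer=optimizer)

# poptorch.DataLoader shards batches to match the replication factor and
# device iteration count configured above.
loader = poptorch.DataLoader(opts, dataset, batch_size=32, shuffle=True)

for x, y in loader:
    _, loss = training_model(x, y)   # compiled for and executed on the IPUs
```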

Working to Become the Global Standard in Machine Intelligence

Over the years, the Graphcore brand name has almost become synonymous with the IPU chip. But while the chip remains at the heart of the company, the organization’s core value proposition comes down to outfitting datacenters with powerful, efficient, and cost-effective AI compute.

“We’re still very proud of the chip and what it can do — but actually, it’s so much more than that,” Chris said. “A datacenter manager is not going to use a chip; he needs to know what to do with it, how you plug it in, and how you scale things out. We have differentiators at each stage of the scale-out story.”

Graphcore’s IPU was specifically designed for AI with the foresight to navigate future industry milestones. “The great thing with the IPU is that baked into the architecture from the beginning is this sort of divergence,” he said. “It’s not because there’s a new iteration of the chip, necessarily, but just because the architecture favors the direction of travel in AI.”

The use cases for AI span multiple industries. Innovators in healthcare, finance, scientific research, and telecommunications will continue to benefit from strategically leveraging Graphcore’s IPU.

Take healthcare, for example:

“AI is enabling vital new directions for medical research,” the Graphcore website states. “Intelligent algorithms are driving innovation in drug discovery, precision medicine, and medical imaging. By harnessing the IPU, healthcare innovators can take full advantage of these new approaches.”

With the IPU, research scientists can run next-generation experimental models with never-before-seen speed and accuracy. This ultimately helps them achieve scientific breakthroughs that can potentially save lives.

Cloud Computing Options and Performance Benchmarks

The adoption of cloud servers and computing has accelerated dramatically in the past few years — and Graphcore is delivering in lockstep with that transition. Case in point: Customers can now run AI workloads on IPUs in the cloud via Graphcore’s Graphcloud.

“On the Graphcloud side, we deliver these platforms through the IPU-POD16 and the IPU-POD64,” Chris said. “Beyond that, we can serve on-premise customers through our partner program of distributors and resellers.”

Graphcore delivers its hardware in a cloud server environment through a partnership with Cirrascale.

“Through Cirrascale, we provide the highest levels of capability and security,” Chris said. “It’s a really nice complement: some people start getting access on-prem, and others have a go in the cloud. These two channels are quite important.”

Graphcloud provides IPU technology in a cloud server environment.

Performance-wise, Graphcore takes a transparent approach, with benchmarks listed clearly on the company’s website for use in comparing current-generation models.

“It’s a balance because we need to show competitiveness on today’s models — and our benchmarks show we are super competitive in that area,” Chris said. “But our architecture is suited to where the industry needs to go in the future in terms of highly efficient, fine-grained computations that need to be done with the multiple instruction, multiple data (MIMD) type of parallel architecture. And some of our benchmarks demonstrate that.”
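
For readers unfamiliar with the distinction, the short sketch below contrasts the two styles on an ordinary CPU: a SIMD-flavored step in which every data element passes through the same instruction stream, and a MIMD-flavored step in which independent workers follow their own code paths on their own data. It is a conceptual illustration only, not Graphcore code or an IPU benchmark.

```python
# Conceptual contrast between the two parallelism styles mentioned above.
# This runs on an ordinary CPU and only illustrates the programming-model
# difference, not IPU performance.
import numpy as np
from multiprocessing import Pool

data = np.random.default_rng(1).standard_normal((4, 1_000_000))

# SIMD/SIMT flavour: every element goes through the same instruction stream,
# which is what GPUs and vector units are optimized for.
simd_result = np.tanh(data) * 0.5 + 0.5

# MIMD flavour: independent workers each run their own code path on their
# own slice of data, branching freely without stalling the others.
def independent_task(args):
    worker_id, chunk = args
    if worker_id % 2 == 0:          # different workers take different branches
        return float(np.percentile(chunk, 99))
    return float(np.median(np.abs(chunk)))

if __name__ == "__main__":
    with Pool(4) as pool:
        mimd_results = pool.map(independent_task, list(enumerate(data)))
    print(mimd_results)
```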

Chris noted that the IPU-based revolution on the research side somewhat mirrors advancements on the consumer side. And both result in dramatically improved performance.

“About a year ago, Apple came along with the Apple M1, this completely new custom silicon,” he said. “There was a degree of skepticism at first, but it has blown people away with its performance capability, its battery life. Apple is obviously in a very different space, serving a different sort of processor need. Still, this example illustrates the potential of designing silicon fit for purpose, rather than trying to modify and optimize existing technology. And that’s just what we’ve done.”
