New IBM chip will open up AI opportunities for businesses

Telum Processor should mean more AI power for businesses

IBM wants a bigger piece of the AI hardware pie and has announced a new processor to do just that.

During the recent Hot Chips conference, the chip giant unveiled the details of its new Telum Processor, designed to bring deep learning inference to enterprise workloads.

According to a follow-up press release, the new chip is designed to help address fraud in real time. It is also IBM’s first processor to feature on-chip acceleration for AI inferencing while a transaction is taking place.

The chip has been three years in the making, and IBM expects it to find use in banking, finance, trading and insurance applications, as well as customer interactions.

IBM is also planning a Telum-based system for the first half of 2022.

AI demand

The idea behind building such a processor, IBM says, came after market analysis found that 90% of companies want to be able to build and run AI projects wherever their data resides. Telum is designed to do just that, allowing enterprises to conduct high-volume inferencing on sensitive, real-time transactions without invoking off-platform AI solutions that could impact performance.

Telum has an “innovative centralized design,” IBM says, which allows clients to apply it to fraud detection, loan processing, trade clearing and settlement, anti-money laundering and risk analysis.

The chip comes with eight processor cores and a deep super-scalar, out-of-order instruction pipeline running at a clock frequency of more than 5GHz, which IBM says makes it optimized for the demands of heterogeneous enterprise-class workloads.

Telum also has a completely redesigned cache and chip-interconnection infrastructure, which provides 32MB of cache per core and can scale up to 32 Telum chips.

The chip was created in partnership with Samsung, which developed the 7nm EUV technology node. It is also the first chip with technology created by the IBM Research AI Hardware Center.
