IBM uses light for ultra-fast computing in AI systems

(IANS) IBM researchers have developed a way to dramatically reduce latency in Artificial Intelligence (AI) systems by using light, instead of electricity, for ultra-fast computing.

The IBM team, along with scientists from the universities of Oxford, Muenster and Exeter, achieved this by using photonic integrated circuits that use light instead of electricity for computing.

The light-based tensor core could be used, among other applications, for autonomous vehicles.

In a Nature paper, they detailed the work, demonstrating a photonic tensor core that can perform computations with unprecedented, ultra-low latency and compute density.

“Our tensor core runs computations at a processing speed higher than ever before. It performs key computational primitives associated with AI models such as deep neural networks for computer vision in less than a microsecond, with remarkable areal and energy efficiency,” IBM said in a blog post.

In 2015, researchers from Oxford University, the University of Muenster and the University of Exeter developed a photonic phase change memory device that could be written to and read from optically.

The new photonic tensor core can perform a so-called convolution operation in a single time step.

Convolution is a mathematical operation on two functions that outputs a third function expressing how the shape of one is changed by the other.
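As a rough illustration of the operation described above, here is a minimal one-dimensional convolution in plain Python, written as it is typically used in deep neural networks (sliding a kernel over an input and accumulating products). The function name and kernel values are illustrative only and are not from IBM's implementation; the photonic core performs the equivalent multiply-accumulate work in a single time step rather than in a loop.

```python
def convolve(signal, kernel):
    """Slide the kernel over the signal, multiplying and accumulating."""
    n, k = len(signal), len(kernel)
    out = []
    for i in range(n - k + 1):
        # Each output value is a sum of element-wise products,
        # i.e. a series of multiply-accumulate (MAC) operations.
        out.append(sum(signal[i + j] * kernel[j] for j in range(k)))
    return out

# A simple edge-detecting kernel applied to a short signal
print(convolve([1, 2, 3, 4], [1, 0, -1]))  # prints [-2, -2]
```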

“We use a measure called TOPS to assess the number of Operations Per Second, in Trillions, that a chip is able to process,” said Abu Sebastian from IBM Research.

This is just the beginning.

“We expect that with reasonable scaling assumptions, we can achieve an unprecedented PetaMAC (a thousand trillion multiply-accumulate operations) per second per mm2,” IBM said.

“In comparison, the compute density associated with state-of-the-art AI processors is less than 1 TOPS/mm2, meaning less than a trillion operations per second per mm2,” IBM said.
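A back-of-envelope calculation makes the gap between the two figures concrete. The convention of counting one MAC as two operations (a multiply and an add) is an assumption on our part, not a figure from IBM:

```python
peta_mac_per_s_mm2 = 1e15  # projected photonic density: 1 PetaMAC/s per mm2
tops_per_mm2 = 1e12        # electronic processors: under 1 TOPS/mm2

# Assumption: one MAC counts as two operations (multiply + add),
# so the projected photonic density in operations per second is:
ratio = (2 * peta_mac_per_s_mm2) / tops_per_mm2
print(ratio)  # prints 2000.0, i.e. roughly a 2,000x density advantage
```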
