Das recognized as outstanding researcher for work on computational caches

Computational caches are an emerging technology based on the use of a processor's cache space to perform computations.

Prof. Reetuparna Das.

Prof. Reetuparna Das received a 2020 Intel Outstanding Researcher Award for her work on computational caches. This annual award recognizes exceptional contributions made through Intel-sponsored research, with consideration given to fundamental insights, industrial relevance, technical difficulty, communications, and potential student hiring associated with a candidate’s research program.

Computational caches are an emerging technology based on the use of a processor's cache space to perform computations. Over two-thirds of a processor's chip area (specifically its caches) and all of a computer's main memory are devoted to temporary storage. Today, none of these memory elements can perform computations. In the face of slowing processor progress as Moore's Law comes to an end, hardware researchers are seeking as many avenues as possible to boost performance.

Das's research targets this untapped capacity and proposes to give memory a dual responsibility: storing data and computing on it. Das and her collaborators have proposed techniques that can repurpose cache memory arrays into a million bit-serial arithmetic logic units. This style of in-memory computing morphs memory arrays into massive vector computation units, providing parallelism several orders of magnitude higher than a contemporary GPU. It also reduces the energy spent shuffling data between storage and compute units, a significant concern in Big Data applications.
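To make the bit-serial idea concrete, here is a minimal software sketch, not Das's hardware design: operands are stored in a transposed, bit-sliced layout so that every column behaves like a tiny one-bit ALU, and an 8-bit addition of a million element pairs completes in eight bit-steps. The word width, element count, and helper names are illustrative assumptions.

import numpy as np

BITS = 8          # word width (assumed for illustration)
N = 1_000_000     # number of parallel "columns", one element per bit-line

def to_bit_slices(words, bits=BITS):
    """Transpose N integers into a (bits, N) matrix: row i holds bit i of every word.
    This mirrors the transposed data layout used for in-cache bit-serial compute."""
    return np.array([(words >> i) & 1 for i in range(bits)], dtype=np.uint8)

def from_bit_slices(slices):
    """Reassemble integers from a (bits, N) bit-slice matrix."""
    return sum(slices[i].astype(np.uint64) << i for i in range(slices.shape[0]))

def bit_serial_add(a_slices, b_slices):
    """Add all N element pairs at once, one bit position per step.
    Each step stands in for activating one word-line pair; every column acts
    as a one-bit full adder, so all N additions advance together."""
    bits, n = a_slices.shape
    carry = np.zeros(n, dtype=np.uint8)
    out = np.empty_like(a_slices)
    for i in range(bits):
        a, b = a_slices[i], b_slices[i]
        out[i] = a ^ b ^ carry
        carry = (a & b) | (carry & (a ^ b))
    return out

rng = np.random.default_rng(0)
x = rng.integers(0, 2**(BITS - 1), N, dtype=np.uint64)
y = rng.integers(0, 2**(BITS - 1), N, dtype=np.uint64)
result = from_bit_slices(bit_serial_add(to_bit_slices(x), to_bit_slices(y)))
assert np.array_equal(result, x + y)   # a million additions in eight bit-steps

In an actual compute cache the bit slices live in SRAM subarrays and the logic happens on the bit-lines, so all columns advance in lockstep; the numpy loop above only emulates that behavior in software.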

Caches that compute can be a game changer for AI. They can add accelerator capabilities to general purpose processors without taking up significant chip space for a dedicated accelerator. Accelerators, like Google's Tensor Processing Unit, are hardware components designed to perform one kind of computation extremely well. They're often integrated alongside more general computation units to speed up specific applications, but they occupy chip area for that single task. Das's research has shown that caches able to take on these operations themselves can improve the efficiency of Intel Xeon processors by 629x on convolutional neural networks (CNNs).
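As a rough illustration of why CNNs fit this kind of parallelism, a convolution layer can be lowered into one large batch of independent dot products, so every output pixel becomes its own multiply-accumulate lane. The sketch below uses the standard im2col lowering with a made-up helper name and a toy box filter; it is not the Neural Cache implementation, just a way to see the shape of the work.

import numpy as np

def im2col_conv(image, kernel):
    """Lower a 2-D convolution to a batch of independent dot products (im2col).
    Each row of 'patches' feeds one output pixel, so all rows can be computed
    side by side, which is the kind of massive data parallelism an in-cache
    vector unit can exploit."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    # Gather every kh*kw patch into a row: shape (oh*ow, kh*kw)
    patches = np.stack([
        image[i:i + oh, j:j + ow].ravel()
        for i in range(kh) for j in range(kw)
    ], axis=1)
    # One multiply-accumulate chain per patch, all patches independent
    return (patches @ kernel.ravel()).reshape(oh, ow)

img = np.arange(36, dtype=np.float32).reshape(6, 6)
k = np.ones((3, 3), dtype=np.float32) / 9.0   # 3x3 box filter, illustrative
print(im2col_conv(img, k))

The 6x6 example is tiny, but a real CNN layer produces millions of such patch-by-kernel products per image, which is where parallelism several orders of magnitude beyond a GPU's pays off.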

Das, an expert in computer architecture, joined the faculty at Michigan in 2015. Her research contributions have been recognized and supported by a number of prestigious awards, including an NSF CAREER Award, the CRA Borg Early Career Award, and a Sloan Fellowship. She has been inducted into the Halls of Fame of two flagship conferences, ISCA and MICRO, and her thesis research, focused on application-aware on-chip interconnects, was recognized with an IEEE Top Picks award. She has served on over 30 technical program committees and several committees tasked with selecting award papers. She has published over 45 papers and holds seven patents.