Engineers to hack 70-year-old computing problem with new center

Cornell engineers are part of a national effort to reinvent computing by developing new solutions to the “von Neumann bottleneck,” a feature-turned-problem that is almost as old as the modern computer itself.

Most modern computers operate using a von Neumann architecture, named after computer scientist John von Neumann. He proposed in 1945 that programs and data should both reside in a computer’s memory, and that the central processing unit can access them as needed over a memory bus. Von Neumann’s paradigm allowed processor and memory technology to evolve largely independently at breakneck pace, the former emphasizing processing speed and the latter favoring storage density. Soon enough, however, this created a fundamental bottleneck that has grown steadily worse over the years, forcing computer architects to concoct a myriad of engineering tricks such as caching, prefetching and speculative execution.
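The cost of that data movement is easy to observe in ordinary code. The short C sketch below (an illustration, not taken from the article; the array size and stride are arbitrary values chosen for effect) performs the same number of additions twice: once sweeping an array sequentially, where caches and prefetchers hide most of the memory latency, and once with a large stride that defeats them, so the processor spends most of its time waiting on memory.

```c
/* Minimal sketch of the memory bottleneck: identical arithmetic, very
 * different runtimes, depending only on the memory access pattern.
 * Sizes below are arbitrary illustrative values. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)          /* 16M ints, ~64 MB: far larger than cache */
#define STRIDE 4096          /* 16 KB jumps to defeat caching/prefetching */

static double seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    int *a = malloc((size_t)N * sizeof *a);
    if (!a) return 1;
    for (int i = 0; i < N; i++) a[i] = 1;

    /* Sequential pass: caches and hardware prefetchers hide most latency. */
    double t0 = seconds();
    long sum1 = 0;
    for (int i = 0; i < N; i++) sum1 += a[i];
    double t1 = seconds();

    /* Strided pass: same number of additions, but nearly every access
     * misses the cache, so the processor stalls on the memory bus. */
    double t2 = seconds();
    long sum2 = 0;
    for (int s = 0; s < STRIDE; s++)
        for (int i = s; i < N; i += STRIDE) sum2 += a[i];
    double t3 = seconds();

    printf("sequential: %.3f s (sum %ld)\n", t1 - t0, sum1);
    printf("strided:    %.3f s (sum %ld)\n", t3 - t2, sum2);
    free(a);
    return 0;
}
```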

“The faster processors got relative to memory, the more critical this problem of busing data around became,” said José Martínez, professor of electrical and computer engineering. “Today’s processors often find themselves twiddling their thumbs, waiting for data they’ve requested from memory so they can get something done.”

Zhiru Zhang, assistant professor of electrical and computer engineering, and Martínez are working to develop a radically new computer architecture through the new Center for Research on Intelligent Storage and Processing in Memory (CRISP), an eight-university endeavor led by the University of Virginia. The center is funded with a $27.5 million grant as part of the Joint University Microelectronics Program (JUMP). A $200-million, five-year national program, JUMP is managed by North Carolina-based Semiconductor Research Corporation, a consortium that includes engineers and scientists from technology companies, universities and government agencies.

The formation of CRISP comes at a time of increased interest in solving the so-called von Neumann bottleneck, as the growing use of “big data” presents new opportunities to leverage vast sets of digital information for business, health care, science, environmental protection and a wealth of other societal needs. The center aims to develop a new type of computer architecture that treats processing and storage as one and the same mechanism, rather than two separate components. This can be achieved by building processing capabilities right inside memory storage, and by pairing processors with memory layers in “vertical silicon stacks,” according to Martínez.
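As a rough illustration of the processing-in-memory idea (a conceptual sketch only; the vault layout and function names below are hypothetical and do not describe CRISP’s actual design), compare a conventional reduction, where every element must cross the memory bus to reach the CPU, with a near-memory version in which each slice of memory reduces itself locally and only a handful of partial results travel to the processor:

```c
/* Conceptual sketch of processing-in-memory (hypothetical names).
 * Each memory "vault" in a 3D stack reduces its own slice locally,
 * so only one partial sum per vault crosses the bus. The vaults are
 * simulated here with ordinary functions. */
#include <stdio.h>
#include <stdlib.h>

#define NUM_VAULTS 8                 /* illustrative number of stacked vaults */
#define ELEMS_PER_VAULT (1 << 20)

/* Conventional approach: the CPU touches every element itself,
 * so all NUM_VAULTS * ELEMS_PER_VAULT values cross the memory bus. */
static long cpu_reduce(const int *data, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++) sum += data[i];
    return sum;
}

/* Near-memory approach: a per-vault compute unit (simulated here)
 * reduces its local slice; only one partial sum leaves each vault. */
static long vault_reduce(const int *vault_data, size_t n) {
    long partial = 0;
    for (size_t i = 0; i < n; i++) partial += vault_data[i];
    return partial;
}

int main(void) {
    size_t total = (size_t)NUM_VAULTS * ELEMS_PER_VAULT;
    int *data = malloc(total * sizeof *data);
    if (!data) return 1;
    for (size_t i = 0; i < total; i++) data[i] = 1;

    long sum_cpu = cpu_reduce(data, total);

    long sum_pim = 0;
    for (int v = 0; v < NUM_VAULTS; v++)
        sum_pim += vault_reduce(data + (size_t)v * ELEMS_PER_VAULT,
                                ELEMS_PER_VAULT);

    printf("CPU reduce: %ld (%zu values crossed the bus)\n", sum_cpu, total);
    printf("PIM reduce: %ld (%d partial sums crossed the bus)\n",
           sum_pim, NUM_VAULTS);
    free(data);
    return 0;
}
```

In a real stacked-memory system the per-vault loops would run on logic bonded beneath the memory layers; they are simulated with ordinary functions here purely to show how much bus traffic the approach removes.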

“Memory is deeply hierarchical, and at each level there’s an opportunity for adding computing capabilities,” said Martínez, adding that consideration must be given to data structure and usage patterns. “Organizing the computation around this deeply hierarchical system is a big challenge. I could be physically very close to some stored data, but if the probability of such data being relevant to me is low, that proximity most likely does nothing for me.”

The center takes a vertical approach to the problem, spanning hardware, system and applications research themes, with Martínez serving as the center’s lead for the hardware theme. This vertical approach allows the center to tackle another critical challenge: creating a programming framework that is intuitive enough for programmers to use productively and effectively.

“We are essentially blurring the boundaries between compute and storage. This introduces a whole host of new challenges to hardware architecture design, as well as software programming. Our goal is to achieve transparent acceleration where the programmers do not have to reason about the low-level hardware details to optimize communication and data placement,” said Zhang.
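One way to picture that goal is the toy, self-contained sketch below. Every name in it is hypothetical and merely stands in for the kind of compiler and run-time support the center envisions: the programmer writes one plain reduction with no placement annotations, and a runtime layer, not the application, decides whether it executes on the host or on a simulated near-memory unit.

```c
/* Toy sketch of "transparent acceleration" (all names hypothetical).
 * Placement decisions live in a runtime layer rather than in the
 * application code itself. */
#include <stdio.h>
#include <stdlib.h>

/* Application code: a plain reduction with no placement annotations. */
static long sum_kernel(const int *data, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++) s += data[i];
    return s;
}

/* Simulated near-memory execution path; in real hardware this would run
 * on compute units inside the memory stack instead of on a host core. */
static long run_near_memory(const int *data, size_t n) {
    return sum_kernel(data, n);
}

/* Toy runtime policy: large working sets go to near-memory compute,
 * small ones stay on the host. The threshold is an arbitrary stand-in
 * for the data-placement analysis the article describes. */
static long runtime_dispatch(const int *data, size_t n) {
    const size_t threshold = 1 << 16;
    if (n >= threshold) {
        printf("runtime: offloading %zu elements near memory\n", n);
        return run_near_memory(data, n);
    }
    printf("runtime: keeping %zu elements on the host\n", n);
    return sum_kernel(data, n);
}

int main(void) {
    size_t n = 1 << 20;
    int *data = malloc(n * sizeof *data);
    if (!data) return 1;
    for (size_t i = 0; i < n; i++) data[i] = 1;
    printf("sum = %ld\n", runtime_dispatch(data, n));
    free(data);
    return 0;
}
```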

Zhang and Martínez both conduct their research at Cornell’s Computer Systems Laboratory. As part of the project, they envision co-designing the architecture with new compiler and run-time systems that can automatically translate programs into machine code for their new architectures. “We cannot afford to determine the architecture or the run-time system before attacking the other one. We need to design both at the same time,” said Martínez.
