Researchers boost processor performance by getting CPU and GPU to collaborate

Researchers at North Carolina State University have developed a technique to take advantage of the "fused architecture" emerging in modern processors, which puts central processing units and graphics processing units on the same chip. The technique, called CPU-assisted general-purpose computation on graphics processing units (CPU-assisted GPGPU), uses software compiled to let the CPU and GPU collaborate on computing tasks, boosting processor performance by more than 20 percent on average in simulations.

The approach, outlined in a paper by NC State Associate Professor of Electrical and Computer Engineering Huiyang Zhou, Ph.D. candidates Yi Yang and Ping Xiang, and AMD GPU architect Mike Mantor, is designed for fused-architecture chips in which the CPU and GPU share an L3 cache and off-chip memory. It pairs the raw computing power of the GPU with the CPU's more flexible data retrieval and better handling of complex tasks.

The current generation of hybrid CPU/GPU systems, including Intel's "Sandy Bridge" and AMD's "Llano," has helped create more energy-efficient systems and reduce manufacturing costs, Zhou said. "However, the CPU cores and GPU cores still work almost exclusively on separate functions. They rarely collaborate to execute any given program, so they aren't as efficient as they could be. That's the issue we're trying to resolve."

GPUs are obviously designed for handling graphics, but they are also very good at running large numbers of parallel processes, particularly in applications where the same operation must be applied to large amounts of data. Traditionally, one of the biggest problems in using GPUs for general-purpose computing has been that they handle complex, branchy, pointer-heavy code poorly; that is the strength of CPUs. The long pipelines of most GPUs instead favor sequential, streaming reads and applications with a high ratio of arithmetic operations to the amount of data that must be moved to and from memory. Hybrid chips like Sandy Bridge have less main-memory bandwidth than typical discrete GPUs (albeit with lower latency), so keeping the fast level 3 cache filled with data is essential if developers want to avoid starving the GPU of data.

CPU-assisted GPGPU uses the CPU's superior pre-fetching ability to feed data to the GPU, cutting out the performance drag of GPU code fetching from main memory. A program compiled for CPU-assisted GPGPU launches a "pre-execution" program on the CPU at startup, which pre-fetches the data the GPU code will process and loads it into the on-chip level 3 cache. Threads running on the GPU can then hit the L3 cache directly rather than going to memory, reducing latency and significantly boosting performance. In some cases, the researchers say, the performance of simulated applications improved by up to 113 percent.

Why simulated? AMD's current hybrid processor, the Llano, lacks a shared L3 cache, so it won't support the approach. And Intel's Sandy Bridge offers only limited GPGPU functionality. In a phone interview with Ars, Zhou explained that in theory the research could be applied to Sandy Bridge, which provides a shared last-level cache for its CPU and GPU. But he said that Sandy Bridge's GPU "isn't that powerful" and Intel's current software support "doesn't include support for OpenCL and other GPGPU stuff." However, he expects the hardware support for CPU-assisted GPGPU applications to arrive in upcoming generations of hybrid platforms from both Intel and AMD, with software support to follow. And, he added, "it's already assumed that the GPU [in Intel's Ivy Bridge processors] will be much more powerful than Sandy Bridge."

Real World Technologies editor David Kanter said he expects to see "a lot more work in this area, as engineers and researchers must improve performance significantly, while maintaining or reducing power consumption." But he noted that the paper doesn't address the technique's impact on power consumption, and Zhou confirmed that the research hasn't yet yielded hard numbers on it.

Zhou said that his team's research was funded by grants from the National Science Foundation and AMD, and was just the latest collaboration with Mantor. But the work so far has been fundamental research, and he couldn't say how AMD or Intel might commercialize it.

Source: Ars Technica
