Holographic GPU renders at near real-time speeds

Researchers develop specialized hardware to render holographic projections at near real-time speeds. The framerate leaves a bit to be desired, however, so don't throw your 3D glasses away just yet.

If the resurgence of 3D glasses at local cinemas is any indication, we all want a bit more, ahem, depth to our cinematic experience. Unfortunately, the stylish glasses don't exactly lend themselves to an immersive experience. What would be really cool would be animated holograms. While holograms aren't the easiest things in the world to make, it is possible to take a 3D computer model and compute the data necessary to generate a hologram that can be used to project a 3D image from a screen. Given that animation is largely computer generated now anyway, where are my holographic animated movies?

One of the problems turns out to be efficient rendering. A recent paper in Optics Express, although it presents a huge speed-up in holographic rendering, demonstrates just how difficult the problem is. The basic animation is now well within the reach of modern rendering farms—unfortunately, that doesn't leave any power left to put into important things like shading, lighting, and shadows (much less character and plot).

As usual, we need to take a step back—in this case, to look at how a real hologram is made. Take the light from a laser, divide it into two beams. Shine one beam onto the object you want a hologram of, and from there let it hit a slide of photographic paper. The other beam goes directly onto the photographic paper. As a result, the photographic paper records the interference between the two beams rather than an image of the object.

The reason that this is useful is that the light from the laser is spatially and temporally coherent, and only one interference pattern is recorded, rather than the average of many millions of such patterns.

(For those who care about the details: if we take a slice through the laser beam, the electric fields at any two locations in that slice have a fixed relationship to each other. Likewise, if we know the electric field at one point in time and space, then temporal coherence means that we know how it will vary in time from that point in space. This sort of coherence is why we only get one interference pattern.)

To project the image, the process is reversed. Half the laser light is reflected off the interference pattern and then recombined with a beam that has not been modified. The combined beam is then projected towards the viewer using a lens system. Viewers perceive the 3D object rather than the interference pattern.

So, computing a hologram sounds pretty simple: just recreate the interference pattern. But there is a gotcha. If you cut a photograph in two, you get two partial images. Cutting a hologram in two, however, results in two complete images of lesser quality, because every part of the pattern encodes information about the entire image. And therein lies the rub: to display a computed hologram, the intensity of each pixel must be calculated from the contribution of the entire 3D object, not just the object points nearest that pixel.
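To make that concrete, here is a minimal numerical sketch of the idea, not the HORN-6 algorithm itself: a simple point-source model in which every pixel of the hologram plane sums a spherical-wave contribution (with phase) from every object point, then interferes the result with a plane reference beam. All the parameter values here (wavelength, pixel pitch, object coordinates) are illustrative assumptions.

```python
import numpy as np

def hologram_pattern(points, width, height, wavelength=532e-9, pitch=10e-6):
    """Compute a hologram intensity pattern: every object point contributes
    a spherical wave (with phase) to every pixel, so no pixel depends only
    on a local part of the object. Parameters are illustrative."""
    k = 2 * np.pi / wavelength                    # wavenumber of the laser
    ys, xs = np.mgrid[0:height, 0:width]
    px = xs * pitch                               # pixel coordinates on the plane
    py = ys * pitch
    field = np.zeros((height, width), dtype=complex)
    for (ox, oy, oz, amp) in points:              # every point lights every pixel
        r = np.sqrt((px - ox) ** 2 + (py - oy) ** 2 + oz ** 2)
        field += amp * np.exp(1j * k * r) / r     # spherical wave from the point
    reference = 1.0                               # on-axis plane reference beam
    return np.abs(field + reference) ** 2         # recorded interference intensity

# One object point 10 cm behind a tiny 64 x 64 hologram:
pattern = hologram_pattern([(0.0, 0.0, 0.1, 1.0)], width=64, height=64)
```

The cost is the point: the loop body touches every pixel for every object point, which is exactly why rendering full-resolution holograms is so expensive.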

Surely that can't be that difficult? Well, researchers in Japan have created a graphics card, called the HORN-6, that can do this for you. It consists of four Xilinx field-programmable gate arrays (FPGAs), each of which has about 7 million gates and a bit of memory (less than 1MB). Each FPGA is connected to 256MB of DDR RAM, while a fifth, smaller FPGA is used to manage the PCI bus.

These FPGAs divide up the area of a 1,920 x 1,080 LCD and calculate the intensity of each pixel using a ray-tracing algorithm that also tracks the phase of the light; the phase is what allows the interference pattern to be calculated. In a nice bit of engineering, each FPGA finishes computing the block it can hold locally (its on-chip storage limit) in just under the time it takes to fetch the next block from memory. This allows the researchers to keep the FPGA load pretty much constant by prefetching data.

When the rendering is completed, the resulting interference pattern can be displayed on an LCD. This pattern can then be illuminated with a laser to project a hologram that is 1m along one dimension (the paper doesn't say which, so I suspect the diagonal) and has a five degree viewing angle.

Of course, the gamers amongst us want the framerate. Well, this will blow your socks off: a peak performance of 0.08fps (full resolution). But, if you dig deep and shell out for the fully kitted out model, performance goes up. You do this by sticking as many cards in your computer as you have PCI slots. The researchers demonstrated a four-board system that had a whopping 0.25fps. A distributed system of four PCs, each with four boards, clocks in at an entire 1.0fps. Awesome stuff.
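Converting those framerates into seconds per frame makes the scaling easier to judge (all figures are the ones quoted above):

```python
def seconds_per_frame(fps):
    """Invert a framerate to get the time spent on a single frame."""
    return 1.0 / fps

single = seconds_per_frame(0.08)   # one HORN-6 board: 12.5 s per frame
quad = seconds_per_frame(0.25)     # four boards in one PC: 4 s per frame
cluster = seconds_per_frame(1.0)   # four PCs x four boards: 1 s per frame
```

Note the scaling isn't perfectly linear: four boards deliver about a 3.1x speed-up over one (12.5s down to 4s), not a clean 4x, presumably due to coordination overhead.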

It helps to put this in perspective by comparing rendering times with other hardware. Unfortunately, the researchers didn't do a very good job here. I suspect they couldn't use any standard GPUs because these are limited in the ways that they can be programmed (but I am no expert, so this could be incorrect). So, instead, they used CPU rendering times.

Even then, they could have done better, as they compared these numbers to an Intel P4 clocked at 3.4GHz, which isn't exactly cutting-edge hardware. Still, a single frame takes the P4 1 hour, 16 minutes, and 14 seconds to render, so the exact CPU probably doesn't matter that much: their card is vastly faster than current commodity hardware either way.
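Putting the two numbers side by side shows just how large the gap is (both figures are from the comparison above):

```python
p4_seconds = 1 * 3600 + 16 * 60 + 14   # 1 h 16 min 14 s per frame on the 3.4 GHz P4
horn6_seconds = 1.0 / 0.08             # single HORN-6 board at 0.08 fps
speedup = p4_seconds / horn6_seconds   # roughly 366x faster than the P4
```

Even if a modern CPU were several times faster than that P4, it would still be nowhere near a single board, let alone the sixteen-board cluster.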

Source: ars technica

Tags: holography
