Holograms to the Rescue

By KIM BELLARD

Google is getting much (deserved) publicity for its Project Starline, announced at last week’s I/O conference.  Project Starline is a new 3D video chat capability that promises to make your Zoom experience seem even more tedious.  That’s great, but I’m expecting much more from holograms – or even better technologies.  Fortunately, there are several such candidates.

If you’ve been excited about advances in telehealth, you haven’t seen anything yet.

If you missed Google’s announcement, Project Starline was described thusly:

Imagine looking through a sort of magic window, and through that window, you see another person, life-size and in three dimensions. You can talk naturally, gesture and make eye contact.

Google says: “We believe this is where person-to-person communication technology can and should go,” because: “The effect is the feeling of a person sitting just across from you, like they are right there.” 

Sounds pretty cool.  The thing, though, is that you’re still looking at the images through a screen.  Google can call it a “magic window” if it wants, but there’s still a screen between you and what you’re seeing.

Not so with Optical Trap Displays (OTDs).  These were pioneered by the BYU holography research group three years ago, and, in their latest advance, they’ve created – what else? – floating lightsabers that emit actual beams.

Optical trap displays are not, strictly speaking, holograms.  They use a laser beam to trap a particle in the air and then push it around, leaving a luminous, floating path.  As the researchers describe it, it’s like “a 3D printer for light.”

The authors explain:

The particle moves through every point in the image several times a second, creating an image by persistence of vision.  The higher the resolution and the refresh rate of the system, the more convincing this effect can be made, where the user will not be able to perceive updates to the imagery displayed to them, and at sufficient resolution will have difficulty distinguishing display image points from real-world image points.
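
To get a feel for what that means in practice, here’s a quick back-of-the-envelope sketch in Python.  The resolution, refresh rate, and point-spacing figures below are illustrative assumptions, not numbers from the BYU paper:

```python
# Back-of-the-envelope numbers for an optical trap display (OTD), which
# draws an image by dragging a single trapped particle through every
# image point each refresh cycle. All figures below are illustrative
# assumptions, not values from the BYU paper.

image_points = 10_000       # assumed number of points in the drawn image
refresh_hz = 30             # assumed refresh rate for persistence of vision
point_spacing_m = 100e-6    # assumed spacing between adjacent points (100 µm)

# The particle must revisit every point once per refresh cycle.
points_per_second = image_points * refresh_hz

# Lower bound on the particle's average speed, assuming it moves
# directly from each point to the next.
speed_m_per_s = points_per_second * point_spacing_m

print(f"Points traced per second: {points_per_second:,}")
print(f"Minimum average particle speed: {speed_m_per_s:.0f} m/s")
```

Even with these modest assumptions, a single particle has to trace hundreds of thousands of points per second, which is why resolution and refresh rate are the crux of making the illusion convincing.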

Lead researcher Dan Smalley notes:

Most 3D displays require you to look at a screen, but our technology allows us to create images floating in space — and they’re physical; not some mirage.  This technology can make it possible to create vibrant animated content that orbits around or crawls on or explodes out of everyday physical objects.

Co-author Wesley Rogers adds: “We can play some fancy tricks with motion parallax and we can make the display look a lot bigger than it physically is.  This methodology would allow us to create the illusion of a much deeper display up to theoretically an infinite size display.”

Indeed, their paper in Nature speculates: “This result leads us to contemplate the possibility of immersive OTD environments that not only include real images capable of wrapping around physical objects (or the user themselves), but that also provide simulated virtual windows into expansive exterior spaces.”

I don’t know what all of that means, but it sounds awfully impressive.

The BYU researchers believe: “Unlike OTDs, holograms are extremely computationally intensive and their computational complexity scales rapidly with display size.  Neither is true for OTD displays.”  They need to meet Liang Shi, a Ph.D. student at MIT who is leading a team developing “tensor holography.” 

Before anyone with mathemaphobia freaks out about the “tensor,” let’s just say that it is a way to produce holograms almost instantly. 

The work was published in Nature last March.  The technique uses deep neural networks to generate 3D holograms in near real time.  I’ll skip the technical details of how this all works, but you can watch their video.

Their approach doesn’t require supercomputers or long calculations, instead allowing neural networks to teach themselves how to generate the holograms. Amazingly, the “compact tensor network” requires less than 1 MB of memory.  The images can be calculated from a multi-camera setup or LiDAR sensor, which are becoming standard on smartphones.
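
To make that concrete, here’s a minimal sketch of the general idea in PyTorch: a tiny fully convolutional network that maps an RGB-D image (color plus depth, the kind of input a LiDAR-equipped smartphone could provide) to a per-pixel hologram phase map.  The architecture, layer sizes, and names are assumptions for illustration, not the MIT team’s published network:

```python
# Illustrative sketch of a "tensor holography"-style model: a small fully
# convolutional network mapping an RGB-D image to a phase-only hologram.
# The architecture and sizes are assumptions for illustration, NOT the
# MIT team's published network.
import torch
import torch.nn as nn

class TinyHologramNet(nn.Module):
    def __init__(self, channels: int = 24, layers: int = 6):
        super().__init__()
        blocks = [nn.Conv2d(4, channels, kernel_size=3, padding=1), nn.ReLU()]
        for _ in range(layers - 2):
            blocks += [nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                       nn.ReLU()]
        # One output channel: a per-pixel phase value for the hologram.
        blocks += [nn.Conv2d(channels, 1, kernel_size=3, padding=1)]
        self.net = nn.Sequential(*blocks)

    def forward(self, rgbd: torch.Tensor) -> torch.Tensor:
        # rgbd: (batch, 4, H, W) -- RGB plus a depth channel.
        # Output phases are squashed into [-pi, pi].
        return torch.pi * torch.tanh(self.net(rgbd))

model = TinyHologramNet()
params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {params:,} (~{params * 4 / 1e6:.2f} MB as float32)")
# A network this small stays well under the ~1 MB figure quoted above.

# Example: one 256x256 RGB-D frame in, one 256x256 phase map out.
phase = model(torch.rand(1, 4, 256, 256))
print(phase.shape)  # torch.Size([1, 1, 256, 256])
```

The point of the exercise: because the network is fully convolutional and tiny, a single forward pass replaces the long physics calculations that traditional hologram generation requires.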

“People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations,” Mr. Shi says.

Joel Kollin, a Microsoft researcher who was not involved in the research, told MIT News that the research “shows that true 3D holographic displays are practical with only moderate computational requirements.” 

All of these efforts are already thinking about healthcare.  Google is currently testing Project Starline in a few of its offices, but is betting big on its future.  It has explicitly picked healthcare as one of the first industries it is working with, aiming for trial demos later this year.

The BYU researchers see medicine as a good use for OTDs, helping doctors plan complicated surgeries: “a high-resolution MRI with an optical-trap display could show, in three dimensions, the specific issues they are likely to encounter. Like a real-life game of Operation, surgical teams will be able to plan how to navigate delicate aspects of their upcoming procedures.”

The MIT researchers believe the approach offers much promise for VR, volumetric 3D printing, microscopy, visualization of medical data, and the design of surfaces with unique optical properties. 

If you don’t know what “volumetric 3D printing” is (and I didn’t), it’s been described as being like an MRI in reverse: “the form of the object is projected to form the model instead of scanning the object.”  It could revolutionize 3D printing, and, for healthcare specifically, “Being able to 3D print from all spatial dimensions at the same time could be instrumental in producing complex organs…This would enable better and more functional vascularity and multi-cellular-material structures.”

As for “visualization of medical data,” for example, surgeons at The Ohio State University Wexner Medical Center are already using “mixed reality 3D holograms” to assist in shoulder surgery.  Holograms have also been used for cardiac, liver, and spine surgeries, among others, as well as in imaging.    

2020 was, in essence, a coming-out party for video conferencing in general and for telehealth in particular.  The capabilities had been around, but it wasn’t until we were locked down and reluctant to be around others that we started to experience its possibilities.  Still, though, we should be thinking of it as version 1.0.

Versions 2.0 and beyond are going to be more realistic, more interactive, and less constrained by screens.  They might be holograms, tensor holograms, optical trap displays, or other technologies I’m not aware of.  I just hope it doesn’t take another pandemic for us to realize their potential.

Kim is a former emarketing exec at a major Blues plan, editor of the late & lamented Tincture.io, and now regular THCB contributor.