Currently, the prototype’s field of view is meager: only 11.7 degrees in the lab, far smaller than that of the Magic Leap 2 or even the Microsoft HoloLens.
But Stanford University’s Computational Imaging Laboratory has a full page of visual aids that suggest it might be able to do something special: a stack of thin holographic components that can virtually fit into standard eyeglass frames and be trained to project realistic, full-color, moving 3D images that appear at varying depths.
Like other AR glasses, they use waveguides, components that guide light through the glasses and into the wearer’s eyes. But the researchers say they have developed a unique “nanophotonic metasurface waveguide” that can “eliminate the need for bulky collimating optics,” as well as a “learned physical waveguide model” that uses artificial intelligence algorithms to significantly improve image quality. The study says the models are “automatically calibrated using camera feedback.”
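To picture what “calibrated using camera feedback” means in practice, here is a toy sketch of that kind of optimization loop in plain NumPy. The linear “waveguide” forward model, the parameter names, and the gradient-descent recipe are all illustrative assumptions standing in for the real diffraction physics, not the Stanford team’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" optical behavior of the physical waveguide (unknown to the model).
# In reality this would be complex diffraction and propagation; here it is
# just a per-pixel gain and offset, purely for illustration.
true_params = np.array([1.3, -0.4])

def waveguide_forward(image, params):
    """Toy differentiable forward model of the waveguide."""
    gain, offset = params
    return gain * image + offset

target = rng.random((8, 8))                        # pattern sent to the display
captured = waveguide_forward(target, true_params)  # what the camera captures

# Calibrate the model by gradient descent on the image-matching error,
# i.e. nudge parameters until simulated output matches the camera image.
params = np.array([1.0, 0.0])  # initial guess
lr = 0.1
for _ in range(2000):
    pred = waveguide_forward(target, params)
    err = pred - captured
    # Analytic gradients of mean squared error w.r.t. gain and offset.
    grad_gain = 2 * np.mean(err * target)
    grad_offset = 2 * np.mean(err)
    params -= lr * np.array([grad_gain, grad_offset])

print(np.round(params, 2))  # recovers roughly [1.3, -0.4]
```

The same closed-loop idea scales up when the forward model is a neural-network-augmented physics simulation and the error is measured over many displayed patterns rather than one.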
While Stanford’s technology is currently just a prototype, with the working model seemingly attached to a bench and a 3D-printed frame, the researchers are looking to disrupt the current spatial computing market, which also includes bulky pass-through hybrids like Apple’s Vision Pro, Meta’s Quest 3, and others.
Postdoctoral researcher Gun-Yeal Lee, who helped write the paper published in Nature, says that no other AR system matches it in both capability and compactness.