Apple unveiled its Vision Pro headset with a $3,499 price tag. The experimental tech is ushering in what Apple calls the era of spatial computing.
During its WWDC 2023 conference, Apple finally gave the world its first peek at its newly announced headset. Now tech journalists are getting their first hands-on experiences with the device. It’s a new category of tech that runs what Apple calls visionOS. Users go through a setup process, scanning their faces and ears before they can begin using the Vision Pro.
Navigating visionOS relies entirely on eye movement, hand gestures, and voice input. Tapping your fingers together while looking at an object selects it, while pinch-to-zoom does exactly what you’d expect. You can open multiple apps and arrange them freely in the space around you, though the learning curve may be steep for people unfamiliar with virtual reality controls.
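For developers, the gaze-and-pinch model largely maps onto existing SwiftUI gestures. The snippet below is a minimal sketch, not Apple’s sample code: the view name and placeholder image are made up for illustration, but the tap and MagnifyGesture modifiers are the standard SwiftUI APIs that visionOS routes this input through.

```swift
import SwiftUI

// Hypothetical view for illustration: on visionOS, looking at a view
// and pinching your fingers together arrives as an ordinary tap, so
// existing SwiftUI gesture code picks up the new input model as-is.
struct SpatialPhotoView: View {
    @State private var scale: CGFloat = 1.0

    var body: some View {
        Image(systemName: "photo")   // placeholder content
            .resizable()
            .scaledToFit()
            .scaleEffect(scale)
            // Gaze + finger tap selects the object.
            .onTapGesture {
                print("Selected")
            }
            // Pinch-to-zoom is delivered as a magnify gesture.
            .gesture(
                MagnifyGesture()
                    .onChanged { value in
                        scale = value.magnification
                    }
            )
    }
}
```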
“Just as the Mac introduced us to personal computing, and the iPhone introduced us to mobile computing, Apple Vision Pro introduces us to spatial computing,” Apple CEO Tim Cook said during the WWDC reveal. “Built upon decades of Apple innovation, Vision Pro is years ahead and unlike anything created before—with a revolutionary new input system and thousands of groundbreaking innovations. It unlocks incredible experiences for our users and exciting new opportunities for our developers.”
The headset features two ultra-high-resolution displays that can transform any space into a personal movie theater “with a screen that feels 100 feet wide and an advanced spatial audio system.” Apple Immersive Video offers 180-degree recordings with spatial audio. The headset also includes a feature called EyeSight, which detects when someone else is in the room and lets the wearer see them through the display.
The technology even accommodates people who need vision correction: ZEISS Optical Inserts preserve eye-tracking accuracy without glasses. But what about the spatial audio experience?
Apple says it has developed an advanced spatial audio system that makes sounds feel as if they are coming from the environment around the user, matched to the acoustics of the space. Two individually amplified drivers inside each audio pod deliver a personalized spatial audio experience based on the user’s own head and ear geometry, captured with an iPhone scan during setup.
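Apps don’t get direct access to that ear-scan profile; the personalization happens in the system’s renderer. As a rough illustration of how positional audio works on Apple platforms generally, here is a minimal sketch using the public AVAudioEngine API, with illustrative coordinates and setup that are assumptions, not Apple’s code:

```swift
import AVFoundation

// A minimal sketch: place a mono source at a point in 3D space and let
// the system's HRTF-based renderer make it sound as if it comes from
// that spot. Coordinates and sample rate are illustrative assumptions.
func playPositionedSound() throws {
    let engine = AVAudioEngine()
    let environment = AVAudioEnvironmentNode()
    let player = AVAudioPlayerNode()

    engine.attach(environment)
    engine.attach(player)

    // 3D positioning requires a mono source format.
    let mono = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 1)!
    engine.connect(player, to: environment, format: mono)
    engine.connect(environment, to: engine.mainMixerNode, format: nil)

    // Two meters in front of and half a meter above the listener.
    player.position = AVAudio3DPoint(x: 0, y: 0.5, z: -2)
    player.renderingAlgorithm = .HRTFHQ

    try engine.start()
    // Schedule a buffer or file on `player` before calling play().
    player.play()
}
```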