Eyesynth’s NIIRA Transforms Visual Data From RealSense Depth Sensors Into Sound
NIIRA is a groundbreaking wearable device that transforms spatial data into sound, giving blind and visually impaired users a new way to perceive their surroundings. Powered by RealSense depth-sensing technology and a custom auditory UX, NIIRA delivers real-time 3D awareness through bone-conducted sound, enabling intuitive navigation and independence. A new era in assistive tech has emerged, combining neuroscience, AI and human-centered design to create a true “sonic vision” system.
Challenge: Turning Vision Into Sound
Developing assistive technology that can convey spatial awareness to blind users is a complex engineering challenge: How do you accurately and non-visually represent a three-dimensional, dynamic world in real time — naturally, safely and intuitively?
Traditional tools like canes and guide dogs are essential for the visually impaired but limited. They don’t detect overhead obstacles like branches or awnings, nor do they provide sufficient range or context.
Antonio Quesada founded Eyesynth to bridge this gap using sensory substitution: translating vision into audio in a way the brain can intuitively learn and use.
The requirements for this technology were ambitious:
- Real-time 3D perception with extended range and accuracy
- Compact, wearable hardware with ultra-low latency
- Safe, intuitive auditory output that doesn’t interfere with speech or ambient sound
- Expandable software to support AI-based scene recognition and interaction
The challenge was as human as it was technical, requiring a decade of research, design and real-world testing.
The birth of Eyesynth
Eyesynth emerged from an urgent need. The son of Quesada’s friend was born prematurely with many health challenges, including blindness, which prompted much discussion between the two friends about the lack of meaningful technology for the visually impaired. “There was a cane and a guide dog. That was it,” Quesada said. “We had to do better.”
Long fascinated by synesthesia — a phenomenon where senses cross over (like hearing colors or seeing sounds) — Quesada, who experiences it himself, wondered: What if we reversed the process and converted geometry into sound?
Testing the theory, they started running experiments with the child and were stunned. “He performed brilliantly in just one evening,” Quesada recalled.
The science made sense. The brain’s audio and visual learning centers are closely connected. Mixing the two was natural for the brain — even intuitive. Interestingly, while every baby is born with synesthesia, only 20% of adults have it.
What followed was a mission spanning a decade, one that would merge neuroscience, computer vision and sound design to open the world to those who navigate without sight.
The goal was to do more than build a tool. It was to give blind individuals a new sense, a way to experience the world in real time, with autonomy and confidence.
Solution: A RealSense-Powered Wearable Perception System
After years of attempting to build its own vision technology and not achieving the desired results, Eyesynth partnered with RealSense to power its wearable solution.
“We always had the concept of using some type of computer vision for smart glasses,” said Quesada. “When we discovered RealSense, we found precision and low-energy consumption, which is essential for a portable device. Depth sensing is very complex, and it’s full of false positives. RealSense depth technology works flawlessly.”
Eyesynth currently utilizes the RealSense D415, which features a tightly focused field of view and delivers high depth resolution for applications requiring precise measurements. The camera provides real-time depth data up to five meters, enabling users to detect obstacles well in advance.
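For readers who want to experiment with the same class of hardware, the snippet below is a minimal sketch of reading D415 depth frames with pyrealsense2, the Python wrapper for librealsense. The stream settings and the 5-meter cutoff are illustrative assumptions, not Eyesynth’s actual configuration.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# 848x480 depth at 90 fps is a supported D415 mode; the choice here is
# illustrative, not Eyesynth's production setting.
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 90)
profile = pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance in meters at the center pixel; readings beyond ~5 m are
    # treated as out of range for obstacle alerts in this sketch.
    d = depth.get_distance(depth.get_width() // 2, depth.get_height() // 2)
    if 0.0 < d <= 5.0:
        print(f"obstacle at {d:.2f} m")
finally:
    pipeline.stop()
```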
Efficient, real-time data processing — up to 36.6 million pixels per second — was critical for Eyesynth’s breakthrough. This was made possible by RealSense’s integration of an ASIC (application-specific integrated circuit). “RealSense’s use of an ASIC offloads a significant amount of computational work from our system,” said Quesada. “It has solved so many challenges for us.”
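As a rough consistency check (the specific mode is an assumption, not stated by Eyesynth), that figure matches the D415’s 848 × 480 depth resolution at 90 frames per second: 848 × 480 × 90 ≈ 36.6 million depth pixels per second.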
RealSense ASICs are purpose-built to manage the complex calculations required for stereo depth vision. Their dedicated design enables fast, efficient processing of depth data at high resolutions and frame rates. And by handling depth processing onboard, the ASICs minimize strain on the host CPU or GPU, freeing those resources for other tasks and resulting in lower overall power consumption, a critical factor in wearable technology.
A new era in assistive technology for the blind
After years of iteration and testing, Eyesynth unveiled NIIRA (Non-Invasive Image Resynthesis into Audio) in May 2025 — a groundbreaking wearable sensory platform that converts spatial data into sound, offering a new level of independence and environmental awareness for people who are blind or visually impaired.
NIIRA complements the use of canes and guide dogs, providing enhanced navigation.
Lightweight, ergonomic glasses are embedded with the RealSense D415 camera attached to a pocket-sized processing unit. The real-time data captured by the 3D depth camera is transmitted to a custom audio synthesizer, which converts each depth pixel (within 20 milliseconds) into a microsound.
The sound, similar to the murmur of the sea, changes its characteristics according to the recorded 3D maps. Volume indicates distance. Pitch represents height. And stereo panning maps the object’s horizontal position. The result is a panoramic, evolving sonic landscape — a kind of “language of sound” users can learn in days and master over time.
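To make that mapping concrete, here is a minimal Python/NumPy sketch of a depth-to-audio parameterization of the kind described above, where volume tracks distance, pitch tracks height in the frame and stereo pan tracks horizontal position. The function name, frequency range and 5-meter limit are illustrative assumptions; Eyesynth’s actual synthesizer generates microsounds with its own custom engine.

```python
import numpy as np

def depth_to_audio_params(depth_m, max_range=5.0,
                          f_low=200.0, f_high=2000.0):
    """Map a depth image (meters, H x W) to per-pixel audio parameters.

    Illustrative mapping only: volume falls off with distance, pitch rises
    with height in the frame, and pan follows horizontal position.
    """
    h, w = depth_m.shape
    valid = (depth_m > 0) & (depth_m <= max_range)

    # Volume: nearer obstacles are louder (1.0 at 0 m, 0.0 at max range).
    volume = np.where(valid, 1.0 - depth_m / max_range, 0.0)

    # Pitch: top rows of the image map to higher frequencies.
    rows = np.linspace(1.0, 0.0, h)[:, None]        # 1.0 = top of frame
    pitch_hz = f_low + rows * (f_high - f_low) * np.ones((h, w))

    # Pan: -1.0 = far left, +1.0 = far right.
    pan = np.linspace(-1.0, 1.0, w)[None, :] * np.ones((h, w))

    return volume, pitch_hz, pan
```

Under this kind of mapping, a nearby object high on the user’s left would come through as a loud, high-pitched sound panned to the left channel.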
An integrated Ortofon A/S bone conduction audio system transmits sound through the skull, keeping ears free for ambient sound and conversation. Users can talk, hear their surroundings and navigate simultaneously.
The device’s architecture is built like a digital orchestra: CPUs, GPUs, ASICs, DSPs, audio codecs and neural networks, all running under a customized Linux system built for stability and speed.
The full Eyesynth system integrates:
- RealSense D415 camera module embedded in the glasses frame
- On-device SLAM and obstacle mapping processed locally on a portable computing unit
- Sonic feedback system that translates 3D data into spatialized audio, using bone-conduction audio so users can retain full situational awareness
- AI inference engine (introduced in NIIRA OS 1.5) that interprets general scenes, reads text and identifies clothing via cloud-based processing
- Bluetooth audio for phone streaming directly through the headset

The system is portable, energy-efficient and endlessly expandable; a simplified sketch of how these components might work together in a real-time loop follows below.
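The sketch below organizes the stages listed above into a capture-map-sonify-output loop. All function names are hypothetical placeholders rather than Eyesynth’s actual software interfaces; only the roughly 20-millisecond frame-to-sound budget comes from the article.

```python
import time

# Hypothetical placeholders for the stages listed above; these are not
# Eyesynth's real interfaces.
def capture_depth_frame():
    """Grab one depth frame from the glasses-mounted D415."""
    return None

def update_obstacle_map(depth_frame):
    """Run local SLAM / obstacle mapping on the portable unit."""
    return []

def synthesize_soundscape(obstacles):
    """Turn the obstacle map into spatialized audio parameters."""
    return []

def play_bone_conduction(audio):
    """Send the audio to the bone-conduction transducers."""
    pass

FRAME_BUDGET_S = 0.020  # ~20 ms depth-pixel-to-sound latency cited above

def run_loop(num_frames=1):
    for _ in range(num_frames):
        start = time.monotonic()
        depth = capture_depth_frame()
        obstacles = update_obstacle_map(depth)
        audio = synthesize_soundscape(obstacles)
        play_bone_conduction(audio)
        # Sleep off any remaining time so the loop stays near real time.
        time.sleep(max(0.0, FRAME_BUDGET_S - (time.monotonic() - start)))
```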
The latest version, NIIRA OS 1.7, further advances usability with AI-powered scene recognition and analysis for richer image descriptions, plus voice interaction. Users can now ask what’s around them, what a text says or even what someone is wearing. Perception distances and speech speed are customizable, left-right balance controls are available, and the release adds security, performance and stability enhancements.
It also features redesigned tutorials for instant onboarding.
“This is more than assistive tech,” said Quesada. “It’s a new sense, a new kind of freedom.”
It feels intuitive. It doesn’t overwhelm. And it adapts.
By combining depth-sensing hardware, auditory UX design and AI, Eyesynth has created a wearable platform that redefines what assistive vision can be.
“After years of learning, testing and refining, we finally brought our vision to life,” said Quesada. “RealSense gave us the depth perception backbone, while AI added intelligence and personalization, making the experience more powerful and human.”
Eyesynth continues to evolve its designs, features and applications, and is actively testing newer RealSense camera models to further enhance its technology.
Results: The World Opens Up, Transforming Users’ Lives
NIIRA is now in production in Spain, with users reporting transformative improvements in mobility, confidence and daily independence. The results:
- Enhanced navigation and safety: Users can detect obstacles such as branches, poles and signage up to 5 meters away, including overhead obstacles that canes and guide dogs cannot reveal
- Infrastructure-free: Works anywhere; no GPS, beacons or calibration needed
- Fast adaptation: Users adapt quickly to audio cues thanks to the intuitive spatialization design
- Personalized user experience: OS 1.7 allows users to tailor audio, language and interaction preferences
- Continuous growth: AI updates expand capabilities — from identifying objects to describing environments
The impact is emotional as well as functional.
One beta tester, blind from birth, can now identify hairstyles by the shape of their sound. Another can finally play hide-and-seek with his daughter.
“That simple game was impossible before,” said Quesada. “Now it’s possible.”
NIIRA can also be adapted for people with partial vision, night blindness and tunnel vision. Using Eyesynth’s core technology, new AI models are being developed to help users cross complex intersections safely by guiding them along the safest path.
NIIRA is now available in Spain, and Eyesynth plans to roll it out across the rest of Europe and then to the U.S.
What began as a response to one child’s need has become a pioneering movement in humanistic technology — technology that augments human potential and restores agency.
“Blindness isn’t a limitation of the person,” said Quesada. “It’s a limitation of how society supports them. In many countries, to be blind is to be less than a person. We reject that.”
With RealSense at its core, Eyesynth has created a new way of seeing, one not through the eyes, but through sound, memory and the mind.