Science & technology | Seeing the light
Robots with human-inspired eyes have better vision
Their reaction times can even surpass their makers’
February 12th 2026
Autonomous vehicles face many hazards as they set out on the road. Cyclists swerve in and out of traffic; distracted pedestrians amble into the street; human drivers change lanes without indicating. Accurate vision and quick reflexes are required. Until now, even the best robots struggled to make sense of such complex environments as quickly as humans. That may be about to change. In a study published this week in Nature Communications, researchers took inspiration from the human eye to develop a new artificial-vision system that is four times faster than the current state of the art.
Many robots are equipped with cameras to allow them to see the world. These digital eyes record sequences of still images which must somehow be interpreted as motion. A popular approach is optical flow. As a robot moves through its environment, optical-flow algorithms convert the shifting patterns of brightness it sees into information about its own movement and that of the objects around it. These algorithms allow robots to safely navigate busy streets, track the movement of table-tennis balls and even perform precision surgery.
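The principle behind optical flow can be sketched in a few lines of code. One classic method (a Lucas-Kanade-style solver, used here purely as an illustration; the study does not specify an algorithm) assumes that a moving point keeps its brightness between frames, and solves for the motion that best explains the observed change:

```python
import numpy as np

def flow_for_patch(prev, curr):
    """Estimate one (u, v) motion vector for a small image patch.

    A toy Lucas-Kanade-style solver: brightness constancy implies
    Ix*u + Iy*v = -It, solved by least squares over the patch.
    """
    Iy, Ix = np.gradient(prev.astype(float))      # spatial gradients
    It = curr.astype(float) - prev.astype(float)  # temporal change
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    u, v = np.linalg.lstsq(A, -It.ravel(), rcond=None)[0]
    return u, v

# A smooth blob that drifts one pixel to the right between frames.
xx, yy = np.meshgrid(np.arange(32), np.arange(32))
blob = lambda cx: np.exp(-((xx - cx)**2 + (yy - 16)**2) / 40.0)
u, v = flow_for_patch(blob(15), blob(16))
print(u, v)   # u close to 1, v close to 0: rightward motion detected
```

Real systems repeat this calculation over every patch of every frame, which is where the computational burden described below comes from.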
Optical-flow methods are, however, computationally intensive. This is in part because every pixel in each frame must be processed. Even with state-of-the-art technology, distinguishing different objects in a single frame can take over 0.6 seconds. This can be costly. For an autonomous vehicle driving at motorway speeds, every half-second delay leads to around 12 metres of travel with outdated information. If artificial systems are to safely navigate homes, roads and operating theatres, their eyesight will need an upgrade.
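The back-of-envelope sum is easy to reproduce. Assuming, for illustration, a cruising speed of 90km/h (the figure is ours; the article does not specify one):

```python
# Distance travelled on stale information: distance = speed * delay.
# The speed below is an assumed illustrative figure, not from the study.
speed_kmh = 90                       # a motorway cruising speed
delay_s = 0.5                        # half a second of processing lag
speed_ms = speed_kmh * 1000 / 3600   # convert to metres per second
print(f"{speed_ms * delay_s:.1f} m of blind travel")  # prints "12.5 m of blind travel"
```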
Shuo Gao, a roboticist at Beihang University in China, wondered if biology might have the answer. Human eyes tame the complexity of the world by focusing attention only where it is needed. Central to this process is a region of the brain known as the lateral geniculate nucleus (LGN). The LGN acts as a relay station in the visual pathway, receiving information from the retina—where visual stimuli are converted into electrical signals—and passing it on to the brain’s visual cortex, where those signals are processed. But the LGN also plays an important filtering role, indicating to the visual cortex where processing power should be prioritised. Because the LGN’s filter is sensitive to changes in both time and space, it allows the brain to efficiently identify and track rapid movement, whether from a changing traffic light or a pedestrian crossing the street.
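In software terms, the LGN's spatiotemporal sensitivity resembles a change detector combined with centre-surround contrast. The sketch below illustrates that principle only; it is not the circuitry described in the paper, and the box blur and `surround` parameter are our own simplifications:

```python
import numpy as np

def lgn_response(prev, curr, surround=5):
    """Toy LGN-style spatiotemporal filter (an illustration, not the paper's).

    Temporal part: the difference between consecutive frames flags change.
    Spatial part: subtracting a local 'surround' average sharpens the
    response to small moving features over broad illumination shifts.
    """
    change = curr.astype(float) - prev.astype(float)
    k = np.ones(surround) / surround   # simple box-blur kernel
    blur = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, change)
    blur = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blur)
    return np.abs(change - blur)       # peaks where change is local and sharp

prev = np.zeros((32, 32))
curr = prev.copy()
curr[10, 20] = 1.0                     # a small flicker of motion
hot = tuple(int(i) for i in
            np.unravel_index(np.argmax(lgn_response(prev, curr)), (32, 32)))
print(hot)                             # prints (10, 20): the moving spot
```

A filter of this kind responds strongly to localised, rapid change and weakly to everything else, which is what lets the brain direct its attention so economically.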
Dr Gao and his team aimed to introduce an LGN-like layer into artificial vision systems to guide the attention of optical-flow algorithms. Doing so with traditional computer chips, in which the circuits that process information are kept separate from those that store data, would not have given them the speed-up they needed. Instead, the researchers turned to so-called neuromorphic hardware, which mimics the human brain by having the processing and storage functions integrated into the same bit of circuitry.
The researchers developed a novel piece of neuromorphic kit to imitate the LGN. Part of the device’s circuitry was designed to track changes in light intensity over time. This allows the device to build up a picture of where motion is occurring within a given environment and prioritise regions for optical-flow analysis.
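The payoff of such prioritisation can be emulated in software, though the authors' device does the filtering directly in its circuitry. In this hypothetical sketch, an image is divided into tiles and only those where brightness has changed are marked for full optical-flow analysis:

```python
import numpy as np

def active_tiles(prev, curr, tile=8, thresh=2.0):
    """Flag image tiles whose brightness changed enough to merit optical flow.

    A software stand-in for motion-guided attention (not the authors'
    hardware): sum absolute frame differences per tile and threshold.
    """
    diff = np.abs(curr.astype(float) - prev.astype(float))
    h, w = diff.shape
    hs, ws = h // tile, w // tile
    tiles = diff[:hs * tile, :ws * tile].reshape(hs, tile, ws, tile)
    energy = tiles.sum(axis=(1, 3))    # total change per tile
    return energy > thresh             # run optical flow only where True

prev = np.zeros((32, 32))
curr = prev.copy()
curr[4:8, 4:8] = 1.0                   # a small object moves in one corner
mask = active_tiles(prev, curr)
print(int(mask.sum()), "of", mask.size, "tiles need full processing")
# prints "1 of 16 tiles need full processing"
```

Skipping the static tiles is what buys the speed: most of the frame, most of the time, contains nothing new.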
The researchers tested the new setup in a variety of contexts, including autonomous driving and robotic-arm operation. They found that their system operated at approximately four times the speed of existing optical-flow methods while maintaining or improving accuracy. Gains were particularly notable for autonomous driving, where accuracy doubled. The system surpassed human-level speeds in most cases.
The system is not without limitations. For one thing, the neuromorphic hardware must still feed information back to conventional algorithms; as good as it gets at prioritising images, it can never overcome those algorithms’ shortcomings. Indeed, the researchers observed that accuracy decreased for scenes with complex, dense motion—a familiar hurdle for optical flow.
The researchers hope that their new system will increase the variety and complexity of scenarios in which robotics can be deployed. That includes off the road and outside the factory. Interactions between humans and life-like robots may soon occur in millions of homes, an environment where the rapid detection and interpretation of subtle visual cues will be essential. Dr Gao’s work may help human and machine see eye to eye. ■