One of the aspirations of science and technology is to develop systems with the capabilities of nature’s most complex organism: the human body and its 37 trillion cells. As a whole it remains out of reach, but partial progress can be made. The Institute of Microelectronics of Seville (IMSE), a joint center of the Spanish National Research Council (CSIC) and the University of Seville, has focused on the system that makes vision possible. Conventional cameras capture an image that is repeated between 30 and 100,000 times per second to form a sequence. But the eye and its connections to the brain go further: we can focus on and perceive minimal changes, which lets us interpret the environment and act accordingly without having to store all the information. It is a capability that IMSE already applies in its dynamic vision sensors (DVS) for event cameras, which have been adopted by companies such as Samsung and Sony.
Traditional cameras are closer to realistic drawing than to vision: they capture a frame and reproduce it. The main advance has been in resolution, incorporating more pixels to gain sharpness and avoid processing defects. “They can provide a huge amount of data that requires a central processor and a lot of wiring to transmit it. And someone has to do the processing,” explains Bernabé Linares, a research professor at IMSE.
“The biological retina does not capture images. All information passes through the optic nerve and is processed by the brain. In a conventional camera, each pixel is independent and, at most, designed to interact with its neighbors to adjust brightness. But the digital image at the exit of a tunnel can be completely white or completely black, whereas, except in truly extreme conditions, we can see both what is inside and what is outside,” the researcher adds. This ability is essential, for example, for the development of autonomous vehicles.
This property of human vision relies on the fovea, a mechanism that maximizes resolution in the area where the gaze is focused while keeping resolution low in peripheral vision. In this way, the amount of information generated by the retina is reduced, but the decision-making ability of visual recognition is preserved.
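The idea of keeping full resolution only around the point of fixation can be illustrated with a minimal sketch (this is an illustrative simplification, not IMSE’s actual sensor design; the function name, block size, and circular-fovea rule are assumptions):

```python
import numpy as np

def foveate(img, cx, cy, radius, block=4):
    """Keep full resolution inside a circular 'fovea' around (cx, cy);
    average the periphery over block x block tiles (coarse resolution)."""
    h, w = img.shape
    out = img.astype(float).copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            # Distance of this tile's center from the fixation point.
            if np.hypot(x + block / 2 - cx, y + block / 2 - cy) > radius:
                tile = out[y:y + block, x:x + block]
                tile[:] = tile.mean()  # collapse peripheral tile to one value
    return out

img = np.arange(256, dtype=float).reshape(16, 16)  # toy "image"
fov = foveate(img, cx=8, cy=8, radius=4, block=4)
# Central tiles keep every pixel; peripheral tiles are flattened,
# so far fewer distinct values need to be transmitted.
```

The periphery still signals that something is there (its average brightness), but only the fixated region carries full detail, which is exactly the data reduction the paragraph describes.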
The neuromorphic systems group at IMSE is pursuing an electronic eye with these and other biologically inspired capabilities: a sensor that delivers results at high speed, without huge energy consumption, while reducing the amount of data needed for efficient processing. With these premises the event camera was developed, which does not operate with frames but with continuous flows of electrical pulses (events, or spikes) produced independently by each light sensor (pixel) when it detects a sufficient change in light.
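The pixel-level behavior described above can be sketched in a few lines: each pixel fires an event only when its (log) brightness changes by more than a threshold. This is a toy model of the DVS principle, assuming frame-pair input for simplicity; real event pixels work asynchronously in continuous time, and the threshold value here is arbitrary:

```python
import numpy as np

def events_from_frames(prev, curr, t, threshold=0.2):
    """Emit DVS-style events: one (x, y, t, polarity) tuple per pixel
    whose log-intensity change since the previous frame exceeds the
    threshold. Unchanged pixels produce no data at all."""
    eps = 1e-6  # avoid log(0) on dark pixels
    delta = np.log(curr + eps) - np.log(prev + eps)
    ys, xs = np.where(np.abs(delta) > threshold)
    return [(int(x), int(y), t, 1 if delta[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]

# A static scene generates nothing; only the changed pixel fires.
prev = np.full((4, 4), 100.0)
curr = prev.copy()
curr[1, 2] = 200.0  # brightness doubles at a single pixel
events = events_from_frames(prev, curr, t=0.001)
print(events)  # a single positive-polarity event at x=2, y=1
```

Because output is proportional to change rather than to sensor size, a mostly static scene yields almost no data, which is the efficiency argument the article makes.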
“In these cameras, the starting information is provided by the outlines of objects,” Linares points out. But these are not images: they are a dynamic flow of pixel events that change over time, and the processing stage mimics the brain, which also builds a hierarchy of layers.
Although the seed of this new approach to imaging appeared at the California Institute of Technology (Caltech) in the 1990s, its use to mimic the human eye began 20 years ago in Switzerland with a European project coordinated by IMSE called CAVIAR. From there came patents, spin-off companies emerged from the research, and developments were adopted by companies such as Samsung and Sony to build image processors. “The goal,” explains the IMSE researcher, “is to develop an electronic fovea [the region of the retina specialized in fine detail vision]. This device makes it possible, without generating a lot of information, to identify the area of interest, which is then processed with high accuracy.”
This device is necessary to highlight data relevant to autonomous driving, simplify processing and reduce resource consumption. “If the camera sees a sign, pedestrian or other vehicle, it doesn’t have to analyze the entire image, just the new element,” Linares explains.
But it also has notable implications for sensors in all kinds of activities, such as surveillance and object tracking, by activating only when a relevant change occurs; in diagnostic imaging, by indicating only the areas that have changed; or in drone navigation. “This frameless sensor responds to changes in light intensity at each pixel and has a high dynamic range and microsecond temporal resolution,” says Bodo Rökör, of Radboud University in the Netherlands, who led a study in which a gesture-recognition system achieves up to 90% accuracy with DVS.
Teresa Serrano, scientist and director of IMSE, points out that neuroscience could use these developments in treatments that interact with the nervous system to help patients with epilepsy or Parkinson’s disease.
The current line of research is brought together in the project Intelligent Artificial Intelligence, which aims to leverage the latest advances in microelectronics and integrated-circuit technology to create neural sensing and processing with greater security and privacy, at lower cost, with power consumption up to 100 times lower and a response time 50 times faster.
One company that emerged from the research group is Chronocam, now called Prophesee. “Basically, what we are developing is a new approach to sensing information, very different from the traditional cameras that have existed for many years,” says Luca Ferri, CEO of Prophesee.
“Our sensors produce very low amounts of data. So they allow you to have a low-power, cost-effective system, simply because they generate only the event data that the processor can react to easily and locally. Instead of feeding it tons of frames that increase the load and hinder its ability to process data in real time, the event camera lets it react in real time to what happens in the scene,” Ferri explains.