Artificial Intelligence has been around for a long time, but only recently did it become a true trend, and that was thanks to its processes reaching mobile phones: first purely as software, and then with specific phone components dedicated to executing it. That is when artificial intelligence processors, the chips for neural processing, were born. This is what we will explore in this article: what exactly they are and what they are for.
What is neural processing?
To talk about neural processing is to talk about algorithms, about code, that tries to emulate the functioning of a human brain. Although all of this is usually lumped under the concept of Artificial Intelligence, when we narrow it down to the processes executed by this type of software it is more accurate to talk about machine learning and deep learning, the first being more general and the second more specific: a subset of the operations encompassed within the first.
Machine learning consists of "using algorithms to parse data, learn from it, and then make a prediction or suggestion about something", while deep learning is "a subset within the machine learning field built on the idea of learning from example".
All of this can be summed up, losing nuances along the way, as follows: both processes seek an algorithm that solves problems on its own based on the data we provide it, and there are several ways of providing that data. On mobiles, these artificial intelligences are designed to solve different problems, such as processing photographs, saving energy, or simply making the phone more fluid by learning from our usage and from the phone's own software.
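The machine-learning loop described above (parse data, learn from it, then make a prediction) can be sketched in a few lines. This is a minimal illustration, not how a phone's AI actually works; the scenario (predicting battery drain from screen-on time) and all the numbers are invented for the example.

```python
# Minimal sketch of "learn from data, then predict": ordinary
# least-squares fit of a line y = a * x + b on invented 1-D data.

def fit_line(xs, ys):
    """Fit y = a * x + b to the observations by least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# "Learn" from past observations (hours of screen time -> % battery used)...
screen_hours = [1.0, 2.0, 3.0, 4.0]
battery_drain = [12.0, 21.0, 33.0, 41.0]
a, b = fit_line(screen_hours, battery_drain)

# ...then make a prediction for an unseen input.
predicted_drain = a * 5.0 + b
```

Real machine learning on a phone uses far richer models, but the shape is the same: past data goes in, a prediction comes out.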
So, thanks to the neural processing we have described, phones are now able to use the camera to recognize objects, scenes and even people, which is probably the most striking use of these algorithms. And to make it work properly, manufacturers have offered support in the form of specific chips: the neural processing chips already circulating on the market.
Hardware to support software
Travelling the same road to reach the same point, or a very similar one, designers and manufacturers of mobile processors began last year to offer hardware support for neural learning processes. To back up the software, specific chips began to be included within the various SoCs, and today almost every manufacturer has its own solution.
Almost all processors in the mobile world are based on the ARM architecture, which means ARM's designs set the direction for the rest of the market, although each manufacturer decides whether or not to implement those improvements. ARM made an important contribution when it developed DynamIQ, an internal core-cluster structure meant to boost the execution of artificial intelligence code.
Going beyond the classic structure of CPU, GPU and the various extra blocks for photography, display support and the like, DynamIQ proposed a new structure for executing neural processing, machine learning and deep learning workloads. The rest of the manufacturers also did their part, and real implementations of these specific chips soon arrived.
Apple included a chip called the Neural Engine in its recent Apple A11 Bionic, Huawei did the same with its NPU, or Neural Processing Unit, in the Kirin 970, and responses from both Qualcomm and Samsung soon followed. The former, for example, added the Zeroth NPU and remodeled existing components, such as a Hexagon DSP redesigned for that purpose. But what exactly do these chips do?
What does a neural processing chip do?
We can summarize it like this: the neural processing chip is the one in charge within a processor that contains it, at least when we talk about executing code specific to machine learning or deep learning. This code requires a huge number of calculations, so the chip performs part of them itself and distributes the rest according to the availability of each part of the SoC.
A neural processing chip is responsible for putting all the internal components of the processor to work when executing artificial intelligence code. The 'parallelization' of processes is achieved by running some parts of the code on the CPU and the most complex parts on the GPU, while the ordering, and part of the load itself, is carried by the chip in question, be it the Neural Engine, the NPU or whichever implementation applies.
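The split described above can be pictured as a simple scheduler. This is an illustrative sketch only, not any real driver or vendor API: the unit names, the "complexity" scores and the thresholds are all invented to show the idea of routing light work to the CPU, heavily parallel work to the GPU, and the neural chip's own share to the NPU.

```python
# Hypothetical sketch of heterogeneous dispatch: each task is routed to
# a processing unit by its (invented) complexity score. Thresholds are
# arbitrary and exist only to illustrate the ordering role of the chip.

def dispatch(tasks):
    """Assign each (name, complexity) task to CPU, GPU or NPU."""
    plan = {"CPU": [], "GPU": [], "NPU": []}
    for name, complexity in tasks:
        if complexity >= 8:        # heavily parallel work -> GPU
            plan["GPU"].append(name)
        elif complexity >= 4:      # the neural chip takes its own share
            plan["NPU"].append(name)
        else:                      # lightweight ops stay on the CPU
            plan["CPU"].append(name)
    return plan

workload = [
    ("preprocess_frame", 2),
    ("convolution_layers", 9),
    ("scene_classification", 5),
]
plan = dispatch(workload)
```

A real SoC makes this decision in firmware and drivers, taking thermal limits and unit availability into account, but the principle is the same: one component orders the work, the others execute their share.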
While executing neural processing code, a chip designed for it tries to make everything run as fast as possible, but also as efficiently as possible. The same goes for the rest of its workload, because the attributions of this type of chip include other tasks like the ones mentioned before: camera control, battery management and the interface.
Thanks to these chips, the information captured by a smartphone's camera sensors is processed faster and more efficiently, and depending on the phone in hand we will also get more advanced functions, such as depth reading, the selection of the most appropriate settings for each photograph, or even the prediction of the movement of whatever is in front of the lens.
So, to the question of whether artificial intelligence is genuinely useful or just a concept used to sell smoke, the answer is the former. An AI installed in our phone will make everything work better, and the benefit will always be for the user. How much better will depend, of course, on how well or badly each AI is designed, and that is where each manufacturer's work comes in.
No, an artificial intelligence installed in our mobile will not make it self-aware; we are not talking about the science fiction scenarios deemed plausible in the medium or long term in other sectors. An AI will manage the resources of the entire phone and process information more efficiently. In short, it will make the phone work better for us.