Deep learning opens up new possibilities

04 May 2021

Control Engineering Europe looks at some of the latest machine vision technology advances.

Carsten Traupe, head of product management at IDS Imaging Development Systems, believes that, for industrial automation, embedded vision and AI-based image processing are among the most influential developments today. “The combination of low price, integrated computing power and compact design makes embedded vision devices the most logical solution for many applications. The performance of embedded vision systems improves continuously despite their compact form factor, and it is now possible to realise applications that were unthinkable just a few years ago.”

Traupe says that, on the manufacturer side, the focus is currently also on the integration of interface standards to simplify commissioning and control and so help drive customer acceptance. “It must not be forgotten that the central purpose of these systems is to replace a host PC. In the best case, modern embedded vision systems are flexibly usable and freely programmable. They can execute neural networks ‘on-board’, hardware-accelerated, and can be seamlessly integrated into processes and plants through standardised interfaces, for example protocols such as REST and OPC UA.

“The greatest added value in these solutions for real-world applications comes from the combination of embedded vision, classic image processing routines and deep learning.” The complete AI solution IDS NXT ocean, for example, features vision apps which can be used to flexibly control which task the embedded vision system solves – from barcode reading to object detection. Image processing takes place directly on the camera, and the results can then be transmitted, for example to a machine control system.
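As a rough illustration of that pattern (the host address, endpoint path and JSON fields below are hypothetical, not the actual IDS NXT API), a control-side client might poll the camera over REST for its latest on-board inference result and turn it into a simple pass/fail decision:

```python
import requests

# Hypothetical camera endpoint; the real device address and API will differ.
CAMERA_URL = "http://192.168.0.10/vision/result"

def fetch_latest_result(timeout_s: float = 2.0) -> dict:
    """Poll the camera for its most recent on-board inference result."""
    response = requests.get(CAMERA_URL, timeout=timeout_s)
    response.raise_for_status()
    return response.json()  # e.g. {"label": "defect", "confidence": 0.93}

if __name__ == "__main__":
    result = fetch_latest_result()
    # Forward a simple pass/fail decision towards the machine control layer
    # (in practice this would be written to a PLC, for example over OPC UA).
    if result.get("label") == "defect" and result.get("confidence", 0.0) > 0.8:
        print("Reject part")
    else:
        print("Pass part")
```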

Coordinated workflows and all the necessary tools are provided to allow stakeholders to develop their own applications and bring them into the field without complications. “With the IDS-hosted cloud-based AI training system, for example, users can train a neural network with their own images without any prior deep learning knowledge and without having to install any hardware or software,” said Traupe. “This is remarkable, because although automation is a defining topic in the market, only a few all-in-one packages for embedded vision with deep learning are available so far. In many cases, this is still project work, with customers building the systems themselves step by step. We have made it our task to remove these hurdles so that companies without dedicated AI specialists can also benefit from the current state of the art.”
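The cloud service hides the training behind a web front end, but the underlying idea, retraining a pre-trained network on a customer's own labelled images, looks roughly like the transfer-learning sketch below. This is a generic PyTorch example with an assumed folder layout, not the IDS workflow itself.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Labelled images arranged one folder per class, e.g. images/defect and images/good (assumed layout).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("images", transform=preprocess)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# Start from a network pre-trained on ImageNet and retrain only the final classification layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "inspection_model.pt")
```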

New possibilities
Commenting on how deep learning is opening up new possibilities for previously impossible inspection applications, John Dunlop, founder and chief technology officer at Bytronic Vision Automation, said: “If you want to better understand a subject as complex as deep learning, simply hold your mobile phone. In that action, you are doing the two things that machines have never been able to master. Through learned experience, you know what is or isn’t a phone, and you can tell by touch whether the phone is damaged and where the edges are.

“In deep learning terms, these two processes – known as surface inspection and categorisation – have always been beyond the capabilities of a typical automated factory. ‘Reading things’ and ‘sorting things’ are tasks that have always been left to human operators.”

Recently, however, deep learning technology has started to move onto the factory floor, bringing new and affordable possibilities for automating production lines.

“You can now teach machines to carry out difficult, subjective product inspections, and two big technical developments are making this possible,” said Dunlop. “The first has been the move by computer manufacturers away from traditional Central Processing Units (CPUs) towards Graphics Processing Units (GPUs). This means complex deep learning computations that once took hours can now be done in seconds.

“Second is the launch of more accessible software. You no longer have to be a developer fluent in code to use deep learning programming tools – a factory engineer can now learn to use deep learning inspection cameras without having to use a PC.”
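To illustrate the first of those developments: in modern frameworks, moving the same network from CPU to GPU is essentially a one-line change, and the hardware does the rest. The sketch below times repeated inference on an arbitrary example network in PyTorch; the batch size and network are placeholders, not a benchmark.

```python
import time

import torch
from torchvision import models

model = models.resnet18()              # arbitrary example network
batch = torch.randn(32, 3, 224, 224)   # a batch of 32 dummy images

def time_inference(device: str) -> float:
    """Run ten forward passes on the given device and return the elapsed time."""
    m = model.to(device)
    x = batch.to(device)
    m.eval()
    start = time.perf_counter()
    with torch.no_grad():
        for _ in range(10):
            m(x)
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the GPU to finish before stopping the clock
    return time.perf_counter() - start

print(f"CPU: {time_inference('cpu'):.2f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_inference('cuda'):.2f} s")
```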

As a result, deep learning is now unlocking applications that were once considered just too complicated for anything other than human inspection. It allows us to replicate that ‘human’ learning process in a machine and to start applying two deep learning processes – surface inspection and categorisation – to production lines.

It can look for damage, errors or abnormalities on a surface, such as paintwork, textiles, printed text or car body parts, and it can carry out complex categorisation: spotting damage on car body panels, detecting upside-down products or removing misshapen foods from fast-moving conveyors. These are tasks where you might currently have a trained human operator on the line.
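A line-side check of that kind could look roughly like the sketch below, which reuses the hypothetical inspection_model.pt trained above; the camera capture, class names and reject signal are all placeholders rather than a real deployment.

```python
import cv2
import torch
from torch import nn
from torchvision import models, transforms

CLASSES = ["defect", "good"]  # assumed class order, matching the training folders

# Rebuild the network and load the hypothetical weights trained earlier.
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))
model.load_state_dict(torch.load("inspection_model.pt"))
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

capture = cv2.VideoCapture(0)  # placeholder for the line camera
while True:
    ok, frame = capture.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        scores = model(preprocess(rgb).unsqueeze(0)).softmax(dim=1)[0]
    label = CLASSES[int(scores.argmax())]
    if label == "defect":
        print("Reject")  # placeholder for a reject signal to the line controller
```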

“At the moment, deep learning requires training with a series of static images. In time, I am sure this will develop into ‘self-learning’ feedback, or learning through experience, with all of the new manufacturing possibilities that will come with that too,” concludes Dunlop.

