When the India-based pharmaceutical giant Cipla looked for ways to improve its manufacturing process, one problem that stood out was that the machine vision cameras used to quality-check finished products could not reliably identify transparent capsules. The solution in use at the time also had difficulty identifying dusty tablets. Such gaps can be critical to operational efficiency, quality control and production costs.
This is where Spookfish Innovations, which develops machine vision solutions for manufacturing units in the pharmaceutical sector, came into the picture. Cipla was clear about what it wanted: solve these specific pill-identification problems. Spookfish, with its computer vision and machine learning algorithms, did exactly that, sparing its customer a significant headache.
This is one of many examples of how machine vision is transforming the manufacturing industry. The market is huge, as Anupriya Balikai, MD of Spookfish, which has offices in Bristol and Bangalore, explains.
“There is tremendous potential for machine vision to bridge the gap in the manufacturing sector,” Balikai said. “Just to give you an example, say you have a new product that is developed. Your machines would need a change in settings to inspect this new product, and in the past you would need an intelligent operator to change these settings. With machine vision and machine learning, you no longer need an operator; automatic learning algorithms would suggest what to change and how to change it.”
How it works
Technically, cameras by themselves just capture images, explained Rick Brookshire, Director of Product Development at Epson America. The so-called “smart cameras” have processors in them for vision processing. Vision systems and AI become significant when deciding what can be done with the captured visuals.
“For example, when training parts for recognition, AI can be used to look at hundreds of parts to define a more accepting model of a good part,” Brookshire said. “At Epson, we use Epson Vision Guide in combination with our IntelliFlex parts feeding system to auto-tune the feeder as well as determine optimal part quantities in the feeder system to maximize throughput. Other examples are where deep learning algorithms are used to help find defects.”
Elaborating on this point further, Shweta Kabadi, Senior Director and Business Unit Manager of Vision SW and Accessories at Cognex, listed the major role machine vision plays in the industrial vertical.
“AI-enabled cameras are used to perform four primary roles in factory automation: guiding, identifying, gauging and inspecting products,” Kabadi said. “Examples of guiding applications could include aligning a screen on a smartphone or guiding a robot to put a windshield in a car. Examples of identifying applications could include reading bar codes behind shrink wrap on a pallet, identifying laser-etched codes on metal pots or detecting components against noisy backgrounds with confusing patterns and glare.”
Actions such as measuring the width and depth of a brake pad as it moves along a conveyor belt are instances of machine vision used in gauging applications. Identifying cosmetic defects, missing pieces and irregularities on finished products or components are examples of the technology used for inspection purposes. This could include checking lithium-ion batteries for potentially hazardous deformations.
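As a minimal sketch of such a gauging step, assume the vision pipeline has already produced a binary foreground mask of the part and a calibrated pixel-to-millimetre scale (both inputs are hypothetical, not from any vendor's system). The bounding-box dimensions could then be computed like this:

```python
import numpy as np

def gauge(mask, mm_per_pixel):
    """Return (width_mm, height_mm) of the part's bounding box.

    `mask` is a 2D boolean array where True marks part pixels;
    `mm_per_pixel` is an assumed calibration constant.
    """
    ys, xs = np.nonzero(mask)
    width = (xs.max() - xs.min() + 1) * mm_per_pixel
    height = (ys.max() - ys.min() + 1) * mm_per_pixel
    return width, height
```

Real systems refine this with sub-pixel edge fitting and lens-distortion correction, but converting calibrated pixels into physical units is the core idea.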
Benefits of machine vision in factories
AI-enabled cameras allow manufacturers to perform critical functions without making contact with the product or slowing down their lines. They can inspect hundreds, or even thousands, of parts per minute, far exceeding the inspection capabilities of humans. They can also inspect object details that are too small to be seen by the human eye.
Source: Prasanth Aby Thomas, Consultant Editor
The first step a machine vision system takes to understand images collected by cameras is to adjust them through processes such as sharpening, cropping or zooming. This processing turns raw frames into meaningful information for computers to read.
As humans, we have a set of eyes capturing images, which then are sent to the brain for image identification. For machines, cameras and other visual sensors perform the function of the eyes, with software, artificial intelligence, FPGA (Field Programmable Gate Arrays) chips, CPUs and GPUs filling in for the brain.
“Image processing can be seen as the first step in analyzing video data, before it is fed to the system’s computer vision algorithms,” said Jerome Gigot, senior director of marketing at Ambarella.
Processing software can sharpen an image to improve readability, change the exposure for a clearer shot, or zoom in and crop certain information, such as a barcode or address on a package.
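Two of those adjustments can be sketched in a few lines, assuming a grayscale image held in a NumPy array (the kernel weights and function names are illustrative, not any vendor's API):

```python
import numpy as np

def sharpen(img):
    # classic 3x3 sharpening kernel: boost the centre pixel and
    # subtract the four direct neighbours (weights sum to 1, so
    # flat regions are left unchanged)
    k = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * k)
    return np.clip(out, 0, 255)

def crop(img, top, left, height, width):
    # isolate a region of interest, e.g. where a barcode sits
    return img[top:top + height, left:left + width]
```

Production systems run such kernels on FPGAs or GPUs rather than in Python loops, but the operation is the same convolution.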
“The type of data that will be analyzed heavily depends on the manufacturing function that needs to be performed,” said Gigot.
Industrial objects, for instance, can be inspected by size, shape, color and texture. The same variables can also be used to recognize agricultural or biological objects.
The second step is to have an algorithm that first distinguishes between the many different pieces of an image, then identifies the edges and models its subcomponents.
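In its simplest form, the edge-identification part of that step uses an image gradient: wherever neighbouring pixel intensities change sharply, there is an edge. A sketch using central differences (a simplified stand-in for operators such as Sobel):

```python
import numpy as np

def edge_magnitude(img):
    """Gradient magnitude per pixel; large values mark edges."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    # central differences in x and y; border pixels are left at zero
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    return np.hypot(gx, gy)
```

Thresholding the resulting magnitude image yields an edge map, from which contours and subcomponents can then be traced.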
In manufacturing, computer vision isn’t limited to a single niche purpose. Some decode barcodes, while others inspect for defects. The latter is powered by neural networks that can compare how a piece of equipment looks versus how it is supposed to look. When the algorithm finds an anomaly, it flags the issue for the user. Other possibilities include monitoring, predictive maintenance, safety inspection and inventory management.
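A neural network learns that comparison implicitly, but the underlying idea can be sketched as a pixelwise "golden sample" check. This is a toy version: real inspection systems also handle alignment, lighting variation and learned tolerances, and the `tolerance` value below is an assumption, not a figure from any real system.

```python
import numpy as np

def flag_defects(part, reference, tolerance=30):
    """Compare a part image against a known-good reference.

    Returns (is_defective, defect_mask); the mask marks pixels
    deviating by more than `tolerance` intensity levels.
    """
    diff = np.abs(part.astype(int) - reference.astype(int))
    mask = diff > tolerance
    return bool(mask.any()), mask
```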
Gigot offers the example of food processing: at a plant, a neural network detects bad apples and instructs the system to remove them in real time, as they speed through the scanner and before they are shipped out to stores.
Seeing beyond vision with predictive capacity
“In addition to cameras, machine learning-based machine vision can also incorporate data collected from various sensors, including LiDAR, radar, ultrasound, and magnetic field sensors. The rich set of data will provide further insight into other aspects of production processes,” said Lian Jye Su, Principal Analyst at ABI Research.
Conventional machine vision only detects product defects and quality issues predefined by humans. With the help of machine learning algorithms, machine vision can pick up unexpected product abnormalities or defects, providing flexibility and valuable insights for manufacturers.
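One simple way a system can learn what "normal" looks like, rather than relying on predefined rules, is to fit statistics over a measured feature (say, a capsule's length) from known-good parts and flag outliers. This is a toy sketch of the idea, not how any production model works; the 3-sigma threshold is an assumption:

```python
import statistics

def build_anomaly_detector(good_values, k=3.0):
    # learn the mean and spread from known-good parts, then flag
    # any reading more than k standard deviations from the mean
    mu = statistics.mean(good_values)
    sigma = statistics.stdev(good_values)
    def is_anomalous(value):
        return abs(value - mu) > k * sigma
    return is_anomalous
```

Deep-learning inspection models generalise this to high-dimensional image features, but the principle of learning the normal range from examples is the same.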
Machine vision-powered predictive maintenance uses machine learning and other connected devices to monitor data and components so that corrective action can be taken before machinery breaks down. It moves manufacturers toward zero downtime, creating cost savings.
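The monitoring loop behind such a system can be sketched in miniature, assuming a stream of sensor readings such as vibration amplitudes (the window size and alarm limit below are illustrative, not values from any real deployment):

```python
from collections import deque

def make_monitor(window=5, limit=80.0):
    """Return a callable that ingests one sensor reading at a time
    and answers whether maintenance should be scheduled."""
    readings = deque(maxlen=window)
    def check(value):
        readings.append(value)
        # smooth with a rolling mean so a single noisy spike
        # is less likely to trigger a false alarm
        return sum(readings) / len(readings) > limit
    return check
```

Real predictive-maintenance systems feed such streams into trained models rather than a fixed threshold, but the pattern of continuous monitoring with an early-warning trigger is the same.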
Another use of machine learning-equipped machine vision systems is for monitoring worker safety. Devices can track people and predict the movement of equipment, helping to prevent dangerous interactions between people and machines.
Source: Elvina Yang, Date: 2019/06/20