A Complete Breakdown of Automatic Vision Systems
Automatic Vision Systems, also known as machine vision, are complex hardware and software solutions that work in unison to enable computers to see. These systems have been in development in industrial settings for decades now but have recently been propelled far ahead thanks to advancements in artificial intelligence.
With AI, computer systems can now extract more information from simple images and video than ever before, whether that means inspecting a piece of manufactured equipment to ensure its quality or monitoring a warehouse to confirm employees are working safely.
In this article, we’ll offer insight into the different pieces of equipment that make up these complex systems, how the AI and imaging process works, and the different arrangements the technology can take.
What Components Make Up An Automatic Vision System?
An automatic vision system is composed of two broad parts: hardware and software. These work in tandem to gather images and then evaluate them against custom metrics that depend on your use case.
While these can be changed and configured based on your purpose, certain elements are intrinsic to automatic vision.
Let’s go over some of them with assistance from Thomas Net, which does an excellent job breaking them down.
The Lighting System
Automatic vision systems are designed to capture a space with as much fidelity as possible, but that fidelity isn’t worth much if the space itself isn’t properly lit. Good lighting for automatic vision has a two-fold objective: to contrast the subject against the backdrop and to illuminate the item fully, without any shadows.
Lighting systems come in many shapes and can be configured by combining multiple lights or adding accessories like diffusers to reduce harsh glare.
This also requires awareness of where the computer is looking, as stray light sources can enter the scene and complicate things. Here our options are to block them out with a physical barrier of some kind or to use strobe lighting bright enough to overpower the ambient light.
The Optical System/Lens
The optical system, or lens, is another crucial component of an automatic vision setup, as it determines the range, focus, and fidelity with which a photo is taken. There are even more factors to consider in an industrial setting when looking for the best automatic vision lens, such as whether it can withstand the conditions in your warehouse.
Traditional camera lenses can struggle in these environments: they become misaligned by the shocks from nearby machinery or are exposed to particles like dust or liquid that can damage the equipment itself.
In an industrial setting, these lenses are ruggedized: they are built with a fixed focal length so they have no moving parts, and they are fully sealed instead. This complex process involves precisely applying adhesives, using spacers to center the lens elements, and fitting it all within a single metal body.
The Image Sensor
Image sensors capture the light passing through the lens and translate it into a digital image. They use CCD or CMOS technology, which detects where light falls and where it does not. We measure the fidelity of these important tools by their resolution: the higher the resolution, the denser the detail in the images they provide.
The more precise the detail captured by the sensor, the more accurately the computer can examine the image.
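As a rough illustration of that relationship, the resolution a sensor needs can be estimated from the smallest feature it must resolve. This is only a sketch: the field-of-view width, defect size, and three-pixels-per-feature rule of thumb below are assumed example values, not specifications from any particular system.

```python
import math

def spatial_resolution_mm(fov_mm: float, sensor_pixels: int) -> float:
    """Millimetres of the scene covered by a single pixel."""
    return fov_mm / sensor_pixels

def min_sensor_pixels(fov_mm: float, smallest_feature_mm: float,
                      pixels_per_feature: int = 3) -> int:
    """Pixels needed along one axis so the smallest feature of interest
    spans at least `pixels_per_feature` pixels (a common rule of thumb)."""
    return math.ceil(fov_mm / smallest_feature_mm * pixels_per_feature)

# A hypothetical 400 mm-wide field of view on a 2048-pixel-wide sensor:
print(spatial_resolution_mm(400, 2048))   # 0.1953125 mm per pixel

# To resolve a 0.5 mm defect with at least 3 pixels across it:
print(min_sensor_pixels(400, 0.5))        # 2400
```

In other words, doubling the sensor's resolution halves the size of the smallest detail the computer can reliably examine over the same field of view.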
The Vision Processing System
The final element of the automatic vision puzzle is the vision processing system: the computer and artificial intelligence that receive the image from the sensor. Once it has that image, the system analyzes it to locate the key features it needs to measure, such as manufactured parts and the specific sections of those parts that are prone to faults. This is a complicated process.
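At its simplest, that kind of feature check can be sketched as a brightness comparison against an expected range inside a region of interest. The toy image values, region, and threshold below are invented for illustration and stand in for what a real inspection pipeline would learn or configure:

```python
# Toy 5x6 grayscale "image" of a part; bright pixels (>= 200) are the part,
# while a dark pixel inside the part region suggests a flaw.
image = [
    [ 10,  10,  10,  10,  10, 10],
    [ 10, 230, 235, 228, 231, 10],
    [ 10, 229,  40, 233, 230, 10],   # 40 = dark spot: a candidate defect
    [ 10, 232, 231, 229, 234, 10],
    [ 10,  10,  10,  10,  10, 10],
]

def find_defects(img, roi, expected_min=200):
    """Return (row, col) pixels inside the region of interest that fall
    below the expected brightness for a good part."""
    (r0, c0), (r1, c1) = roi
    return [(r, c)
            for r in range(r0, r1)
            for c in range(c0, c1)
            if img[r][c] < expected_min]

# Inspect only the area where the part should be:
print(find_defects(image, roi=((1, 1), (4, 5))))  # [(2, 2)]
```

Real systems replace this fixed threshold with learned models and calibrated tolerances, but the core task is the same: locate regions that deviate from what a good part should look like.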
How Does Automatic Vision Imaging Work?
The imaging process for automatic vision can work in several ways, although they are largely similar. The cameras capture continuously, but to analyze a product, the computer isolates, or “grabs,” specific frames.
This can be done with camera interface standards such as CoaXPress or Camera Link, or with embedded processing. As AI Journal explains, it is also possible to connect a digital camera directly through FireWire, USB, or Gigabit Ethernet, although this approach is used in research labs rather than industrial settings. For our Life Science and Medical assemblies, we’ve enabled the use of USB 3.0 and 10GigE (10x faster than Gigabit Ethernet) to offer enhanced frame capture quality in research applications.
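The grabbing logic itself can be pictured as a simple decimation of a continuous stream. The sketch below simulates a camera with a frame counter and keeps only every Nth frame; the frame rates and interval are assumed example numbers, not figures from a real frame grabber:

```python
from itertools import count, islice

def camera_stream():
    """Stand-in for a continuously capturing camera: yields frame IDs.
    In a real system these would be frames arriving over e.g. USB 3.0,
    GigE, CoaXPress, or Camera Link."""
    for frame_id in count():
        yield frame_id

def grab_frames(stream, every_nth, limit):
    """Emulate a frame grabber: keep only every Nth frame for analysis."""
    grabbed = []
    for frame in islice(stream, limit):
        if frame % every_nth == 0:
            grabbed.append(frame)
    return grabbed

# Suppose the camera runs at 60 fps but inspection only needs ~6 frames
# per second; one second of capture then yields six frames to analyze:
print(grab_frames(camera_stream(), every_nth=10, limit=60))
# [0, 10, 20, 30, 40, 50]
```

In practice the grab is often triggered by a sensor detecting a product at the inspection point rather than by a fixed interval, but the principle is the same: the analysis pipeline sees a deliberate subset of what the camera captures.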
How the system evaluates the object again depends on the circumstances: while some applications can use conventional 2D analysis, manufacturing environments often require a more specific understanding of the imagery.
This includes techniques like imaging with X-rays, 3D, line scans, or multispectral, hyperspectral, or infrared bands to better ascertain what it is looking at. All of these run on a central machine that uses a Central Processing Unit (CPU), Graphics Processing Unit (GPU), or Field Programmable Gate Array (FPGA).
Depending on how much information must be analyzed, these machines can pack quite a bit of horsepower, though requirements vary widely between systems that run a few analyses per hour and those that run several per minute.
In all of these situations, the computer applies various filters to the image to highlight specific elements and pick out details. In the case of a self-driving car, this means recognizing hazards, telling the driver when to slow down for a speed bump or maneuver around a pothole.
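One classic example of such a filter is a small convolution kernel that makes edges stand out. The minimal sketch below applies a Sobel-style kernel to a tiny synthetic image whose values are invented for illustration; production systems would use an optimized image-processing library rather than hand-rolled loops:

```python
def convolve3x3(img, kernel):
    """Apply a 3x3 kernel to the interior of a 2D image (no padding)."""
    h, w = len(img), len(img[0])
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r - 1][c - 1] = sum(
                kernel[i][j] * img[r - 1 + i][c - 1 + j]
                for i in range(3) for j in range(3))
    return out

# Sobel kernel that responds to brightness changes from left to right:
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

# Dark region on the left, bright on the right: an edge down the middle.
img = [[0, 0, 9, 9]] * 4

print(convolve3x3(img, sobel_x))  # [[36, 36], [36, 36]]
```

The uniformly dark and bright regions produce no response on their own; only the boundary between them yields large values, which is exactly what lets later stages "highlight specific elements" like part outlines, lane markings, or pothole edges.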
With our machine vision rig and high-quality lens, we provide the software with the visual information and fidelity it needs to understand what it is looking at.
What Are The Types of Automatic Vision Systems?
As all automatic vision machines have different purposes, their configuration can vary widely as well. As seen in Assembly Magazine, some of these include 1D vision systems that analyze a signal one line at a time for use in manufacturing materials like plastics, metals, and paper.
2D systems, meanwhile, build a larger image by scanning one line at a time on a moving conveyor belt, which is helpful for analyzing products like circuit boards. As expected, there are 3D systems as well, which employ several machine vision cameras, all pointed at the same object, to measure both its surface and volume and create a 3D scan of it.
Depending on your needs, you may use one of two approaches known as area scan and line scan. Line scan better suits 1D and 2D systems, as it produces high-fidelity lines that can then be compiled into a full 2D image.
This also works well in tight spaces on conveyor lines or for products that move continuously. Area scan captures a larger area all at once, which is best suited to three-dimensional scans.
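To make the line-scan idea concrete, here is a toy sketch of how successive one-dimensional scans are stacked into a 2D image. The `product` rows and the one-line-per-tick conveyor model are invented purely for illustration:

```python
def scan_line(product, position):
    """Stand-in for a line-scan camera reading one row of the product
    as it passes the fixed scan line on the conveyor."""
    return product[position]

def build_image(product):
    """Compile successive 1D scans into a full 2D image, one line per
    'tick' of the conveyor's motion."""
    return [scan_line(product, pos) for pos in range(len(product))]

# A small "product" represented as rows of brightness values:
product = [[1, 2, 3],
           [4, 5, 6],
           [7, 8, 9]]

print(build_image(product))  # [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

The key design point is that the camera itself never sees more than one line at a time; the 2D picture only exists because the conveyor's steady motion supplies the second dimension, which is why line-scan accuracy depends on precise, consistent belt speed.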
This type of scan is most often used in hardware manufacturing, where an item must be inspected to ensure it has no defects or imperfections. It is also crucial in environments where we need to understand depth in an image, such as on a car’s onboard computer or when monitoring employees in a warehouse.
As we’ve gone over, automatic vision systems are flexible, modular arrangements that are made to be adapted for specific circumstances. This allows the technology to be employed in a wide variety of situations for all types of manufacture and other settings like on self-driving vehicles or in warehouse safety monitoring. Despite these variances, though, there are parts of the technology that will remain constant.
Imaging systems still need proper lighting, high-quality lenses, and image sensors to capture frames, and then a computer with software to send those frames to. Our images are analyzed to determine, for example, whether an employee in our warehouse isn’t wearing a helmet or whether a product fresh off the assembly line has a defect.
Of course, the technologies in both hardware and software continue to evolve, pushing the medium to new heights and uncovering further use cases. In the future, automatic vision may become a feature of our homes, with refrigerators inspecting food to detect whether it is safe to eat or doorbells greeting visitors by name.