Moving machine vision into the third dimension

19 April 2022

Find out how true 3D data, processed with algorithms specifically designed for three-dimensional measurements, can result in fewer false positives and negatives.

Fully-rendered 3D objects generated by hardware and algorithms intentionally engineered to operate in a true 3D space.

Machine vision systems have traditionally inspected objects by reducing the three-dimensional ‘real world’ to two-dimensional flat images. Through careful choice of software, hardware, and calibration, width and length dimensioning data can be extracted from 2D machine vision images.

However, for many industrial applications that involve inspecting or assembling objects, height information must also be considered to succeed. 3D machine vision is able to capture and process width, length, and height information from objects in order to address these applications. 

The vast majority of industrial applications involve guiding, identifying, gauging, or inspecting objects that are 3D. For these tasks, machine vision system designers struggle with how much cost and complexity to add to the system to achieve their automated inspection or assembly goals. For example, can the key measurement be made, or assembly be performed using only a 2D image of a 3D object? If a measurement involves an object that has significant variation in height across the part, then height information must be considered for an accurate measurement. 

The next question is how much height information the system really needs. Can “minimal” height information be collected – for example, the height of each pixel as perceived from the camera’s location? The result is a 2.5D height map, which records the height of the first surface point encountered along each viewing ray. These measurements are accurate near the optical axis, but accuracy decreases as a pixel moves farther from the axis due to perspective distortion. This approach can also leave some points occluded, or concealed from the camera, entirely.
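The collapse from full 3D data to a 2.5D height map can be sketched in a few lines of Python – a hypothetical example using NumPy, with illustrative coordinates. Each grid cell keeps only the point nearest the camera, so an undercut surface at the same X,Y location is simply lost:

```python
import numpy as np

# Hypothetical sketch: collapsing 3D points onto a 2.5D height map keeps
# only the first (highest) surface seen along each vertical viewing ray,
# so anything underneath is occluded and discarded.
points = np.array([
    [2, 3, 4.0],   # top surface at cell (2, 3)
    [2, 3, 1.0],   # undercut at the same cell -- will be discarded
    [0, 1, 2.5],
])

height_map = np.full((4, 4), np.nan)   # NaN marks cells with no data
for x, y, z in points:
    cell = (int(x), int(y))
    if np.isnan(height_map[cell]) or z > height_map[cell]:
        height_map[cell] = z   # keep only the point nearest the camera

print(height_map[2, 3])  # 4.0 -- the undercut point at z = 1.0 is gone
```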

Today, presenting data as 3D point clouds offers a way to avoid perspective distortion and to capture multiple Z values for every X,Y value – an important consideration for concave parts, for example. Although system designers are taking advantage of the lower costs of 3D hardware to solve complex part inspection and assembly tasks, these same solutions suffer from a lack of 3D algorithms and tools designed specifically for high-density 3D point clouds, often referred to as true 3D. Now, advanced image processing solutions are combining cost-effective 3D imaging hardware with powerful 3D algorithms. This integration results in greater accuracy and enables the location of hidden or occluded defects. It also makes the operator’s job easier by displaying fully-rendered 3D objects instead of 2.5D images where height is conveyed by colour.

Reveal hidden features
Today’s automation engineers can use sensors to quickly determine the height of points in a scene and use that data to create a richer description of a 2D image. The addition of this height representation takes data beyond 2D to 2.5D. 2.5D data offers more than pure 2D data, but it only represents a portion of what humans can see. 

Generally speaking, systems that visualise data in 2.5D encode height information using false colour, similar to the way a thermal image presents temperature information through different coloured pixels. These maps limit a vision system’s capabilities because they are difficult to interpret and make it challenging for operators to process the details of a 3D part. 

Furthermore, height maps cannot accommodate rotation and tilting with the same accuracy as a full 3D point cloud because 2.5D data is tied to the viewing direction of the receiving sensor and the underlying 2D grid (e.g. conveyor surface). This can introduce perspective distortion and lower the accuracy of vision inspections.

A true 3D vision system instead provides a point cloud, which is a collection of data points from a scene that is independent of the receiver and the underlying grid, allowing the system to fully render the object as it appears in the real world. Consider this – in a 2.5D representation, every X,Y position on the grid has only one Z value representing height. In a 3D point cloud, every point carries independent X,Y,Z values, so any X,Y position can have multiple Z values.
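The contrast can be made concrete with a short sketch – hypothetical data, using NumPy. A 2.5D grid stores exactly one Z per cell, while a point cloud is simply an unordered list of independent points, so the same X,Y location can carry several heights:

```python
import numpy as np

# 2.5D: a regular X,Y grid where each cell stores exactly one height.
height_map = np.zeros((4, 4))
height_map[1, 2] = 5.0          # only one Z possible at grid cell (1, 2)

# 3D: an unordered N x 3 array of independent (x, y, z) points.
# The same X,Y location can appear with several Z values, e.g. the
# top and bottom surfaces of an overhanging feature.
point_cloud = np.array([
    [1.0, 2.0, 5.0],   # top surface
    [1.0, 2.0, 1.5],   # undercut surface at the same X,Y
    [3.0, 0.0, 2.0],
])

# Count the Z values recorded at X=1, Y=2 in the point cloud.
at_xy = (point_cloud[:, 0] == 1.0) & (point_cloud[:, 1] == 2.0)
zs_at_xy = point_cloud[at_xy, 2]
print(len(zs_at_xy))   # 2 -- the height map could store only 1
```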

This additional information enables accurate representations of shapes with hidden features, such as concave objects. Processing data in 3D allows for information taken from different perspectives to be fused to inspect features, such as indentations and screw holes, while providing a realistic depiction of an object for more intuitive operator analysis. 

Using a 3D point cloud also provides flexibility. 3D images can be used to create 2D images from their pixel intensity values and from multiple perspectives. They can also be used to create highly accurate 2.5D false colour images.

Attempting to merge two different 2.5D images of the same object taken from different perspectives is a non-trivial task for software. Additionally, if two pixels from two separate images are close to each other, the system may have to interpolate Z values due to differences in the relationship between the underlying grids. However, when using 3D data, it is easier for the software to combine the two datasets in a common global coordinate space for inspection, assembly, or post-inspection rework operations. This can be done at minimal computational cost and without introducing computational artefacts into the scene. In short, a true 3D point cloud is a ‘super set’ of data that allows for results in different dimensions and can be used with various tool sets.
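A minimal sketch of that fusion step, assuming a known rigid calibration between two sensors (the rotation R and translation t below are illustrative values, not real calibration data). Once both clouds are expressed in one global frame, merging is plain concatenation – no grid resampling or Z interpolation is involved:

```python
import numpy as np

# Hypothetical calibration: sensor B's pose relative to the global frame,
# expressed as a rotation matrix R and translation vector t.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([100.0, 0.0, 0.0])

cloud_a = np.array([[1.0, 0.0, 0.0]])        # already in the global frame
cloud_b_local = np.array([[0.0, 1.0, 0.0]])  # in sensor B's own frame

# Rigid transform, applied row-wise: p_global = R @ p_local + t.
cloud_b = cloud_b_local @ R.T + t

# With one shared coordinate space, merging is just concatenation.
merged = np.vstack([cloud_a, cloud_b])
print(merged.shape)  # (2, 3)
```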

Building true 3D tools
Despite the benefits of a true 3D point cloud when making automated measurements, adoption of 3D systems for machine vision inspection has been slow due to concerns about cost, complexity, and performance when compared to 2D inspection solutions. A true 3D integrated solution can address these obstacles.

True 3D algorithms are designed for 3D geometric structures – such as cuboids, spheres, and cylinders.

These geometric structures lend themselves to defining subsets of the 3D point cloud as regions of interest, keeping regions of interest both simple and effective in 3D. Conversely, designers must account for the distortion and hidden features that can occur when applying a 2D algorithm to a 2.5D image, which requires the region of interest to be aligned with the supporting surface under the object.
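As a sketch of the idea, a spherical region of interest in true 3D reduces to a simple geometric membership test on the points themselves – no alignment with a supporting surface is needed. The coordinates below are hypothetical:

```python
import numpy as np

# Hypothetical point cloud and a spherical region of interest:
# a point belongs to the ROI if it lies within `radius` of `centre`.
points = np.array([
    [0.0, 0.0, 0.0],
    [0.5, 0.5, 0.5],
    [5.0, 5.0, 5.0],
])

centre = np.array([0.0, 0.0, 0.0])
radius = 1.0

inside = np.linalg.norm(points - centre, axis=1) <= radius
roi_points = points[inside]
print(len(roi_points))  # 2 -- the far point falls outside the sphere
```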

Consider the case of making an arc measurement of a convex part. A 2.5D image interpolates the data it cannot directly access, so a 2.5D tool that interpolates and estimates height values may be accurate or badly wrong depending on whether the part is convex or concave. Similarly, applying a 2D circle tool to a 3D data set yields an oval region, not a circular one. The ability of true 3D data sets to represent multiple height values for each X,Y location, and to apply tools designed for 3D environments, simplifies system design while improving the accuracy of the final results.
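The circle-versus-oval effect can be checked numerically – a sketch with an illustrative radius and tilt angle. A circle on a tilted part projects to an ellipse in the 2D image plane, while in 3D every point remains exactly one radius from the centre:

```python
import numpy as np

# Hypothetical example: a circle of radius 10 on a part tilted 30 degrees
# about the X axis. Dropping Z (the 2D view) foreshortens the Y extent;
# measuring in 3D preserves the true circular shape.
r, tilt = 10.0, np.deg2rad(30)
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
circle = np.stack([r * np.cos(angles),
                   r * np.sin(angles) * np.cos(tilt),  # Y foreshortened
                   r * np.sin(angles) * np.sin(tilt)], axis=1)

# 2D projection: X extent stays 10, Y extent shrinks to r*cos(30) ~ 8.66,
# so a 2D circle tool sees an ellipse.
print(round(circle[:, 0].max(), 2), round(circle[:, 1].max(), 2))

# 3D measurement: every point is still exactly r from the centre.
print(np.allclose(np.linalg.norm(circle, axis=1), r))  # True
```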

Recent advances in semiconductors and sensors are driving down the cost of acquiring a machine vision image and this is enabling designers to add more cameras to their machine vision solutions, giving them rich dimensional data to make more accurate measurements and improve assembly. To handle the demands of these increasingly complex automation tasks in a 3D world, designers need access to true 3D algorithmic tools.

Much like colour machine vision 15 years ago, the world is adopting 3D machine vision because it mimics human perception, providing intuitive ways for people to interact with modern machinery. These future 3D vision systems will require multiple sensors and dense true 3D point clouds to handle more demanding assembly and inspection tasks. 

This article was taken from a Cognex whitepaper entitled ‘True 3D Vision for improved accuracy, ease of use over traditional SE 2.5D solution’ 
