
AI-Driven AR Melds Manufacturing and Data

Image Source: onlyyouqj/Stock.adobe.com

By Mario Sheppard for Mouser Electronics

Published May 10, 2022

Augmented reality (AR) is a software–hardware fusion that enables creators to display information and 3D models digitally in the real world, giving users the sense that those digital objects are actually present in their surroundings. AR is not new, but the hardware and powerful software needed to use it conveniently are becoming more common. With this level of accessibility, AR is gaining use in manufacturing environments to solve many information access challenges.

The next evolution of AR includes pairing it with artificial intelligence (AI). This combination establishes a relationship between real and digital objects by locating and tracking specific surfaces and points of interest. Thus, not only can AI-driven AR display information and models, but it can also aid in detecting new objects, reading text, and making decisions based on what is happening in the environment. Rather than serving only as an aid for humans, AR has the potential to act as a raw data collection stream for AI and machine learning. This technology has already begun to reshape video games and online retail, but AI-driven AR has the potential to deliver even greater gains in information accessibility and problem-solving in industrial manufacturing environments.

Challenges in Accessing Information

Manufacturing requires speed, precision, consistency, quality, and traceability of raw materials, individual components, and finished goods. At any given time, information is flowing in every direction and constantly changing. Unfortunately, the data are not always available where workers need them most.

In-process controls—for example, standard work instructions—ensure that users perform tasks the same way every time. Often, these process documents live in a binder or are posted on a wall near the work area. Even when tablets are used for digital instructions, users must hold the tablet, lift parts, walk around, and complete tasks all at the same time. Under these constraints, work instructions end up serving as training documents or after-the-fact references following an incident rather than as guides for the task at hand.

In logistics, inventory control poses many challenges, especially when the organization must manage thousands of parts coming and going. It is not uncommon for the expected inventory and actual inventory to vary. To combat this potential for discrepancy, companies conduct periodic inventories, which is a valid but tedious, time-consuming, and costly process.

Effective communication channels are also important, but they are hampered by large, noisy, and often physically separated work areas. Email is not necessarily instant, nor does it serve workers who lack corporate email addresses or access to technology. Cell phones are an option, but in some industrial spaces they can present a safety hazard—not to mention that cell phone use is a sore subject for many businesses, which view the devices as a distraction.

AR is already helping solve these challenges with the help of cell phones, tablets, and laptops. AR works by combining a visual sensor or camera, a display device, and software that presents information and 3D models—all of which can be found in most modern mobile devices. Current cell phones pack processing power that rivals many laptops, which means that nearly everyone carrying one already has access to this emerging technology.
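
To make that pairing of camera, display, and software concrete, here is a minimal sketch (a hypothetical example, not from this article) using Python and OpenCV that superimposes a fixed work-instruction string on a live camera feed. The instruction text and window name are placeholders; note that without AI, the overlay is pinned to screen coordinates and has no awareness of what is actually in the frame.

```python
import cv2  # assumes OpenCV is installed (pip install opencv-python)

# Hypothetical work instruction; in practice this would come from the
# plant's document system or MES.
INSTRUCTION = "Step 3: Torque fastener to 12 Nm"

cap = cv2.VideoCapture(0)              # visual sensor: the device camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Software layer: draw the instruction over the live image. Without AI,
    # the text is fixed to screen coordinates, not to anything in the scene.
    cv2.putText(frame, INSTRUCTION, (20, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("AR overlay (no AI)", frame)   # display device
    if cv2.waitKey(1) & 0xFF == ord("q"):     # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```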

The Potential of AI-Driven AR to Solve Problems

The combination of AR with AI could provide even greater results. Consider that without AI, the information displayed is simply superimposed over an arbitrary real-world view from the device’s camera—the equivalent of subtitles overlaid onto a movie. The text exists with or without the video because there is no direct interaction or connection between the two.

By contrast, AI-driven AR does not depend on human input to acquire new and accurate information about the environment. Such systems could recognize surfaces and establish anchor points that digital text and objects can reference and interact with. Whereas conventional AR simply renders a chair over the floor of a room, AI actually locates the floor's surface using the camera and sensors. Once the surface is identified, the system places the object there and tracks its position as the user moves to other areas, seeing the space for itself and displaying the real-time background as the user moves through it.
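
As a rough illustration of how such a system might locate a surface before anchoring digital content to it, the sketch below (a hypothetical example using a RANSAC-style plane fit in NumPy, not any particular AR framework's API) estimates the dominant plane in a 3D point cloud and projects an anchor point onto it.

```python
import numpy as np

def fit_plane_ransac(points, iterations=200, threshold=0.02):
    """Estimate the dominant plane (e.g., the floor) in an Nx3 point cloud.

    Returns (normal, d) such that normal . p + d = 0 for points on the plane.
    """
    rng = np.random.default_rng(0)
    best_inliers, best_model = 0, None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                     # degenerate (collinear) sample
        normal /= norm
        d = -normal.dot(sample[0])
        distances = np.abs(points @ normal + d)
        inliers = int((distances < threshold).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, d)
    return best_model

def anchor_on_plane(point, plane):
    """Project a 3D point onto the detected plane to use as an anchor."""
    normal, d = plane
    return point - (normal.dot(point) + d) * normal

# Synthetic example: a noisy "floor" at z = 0 plus some clutter above it.
rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(-1, 1, 500),
                         rng.uniform(-1, 1, 500),
                         rng.normal(0, 0.005, 500)])
clutter = rng.uniform(0.1, 1, (100, 3))
plane = fit_plane_ransac(np.vstack([floor, clutter]))
anchor = anchor_on_plane(np.array([0.2, 0.3, 0.5]), plane)
```

Production AR frameworks run this kind of surface detection and anchor tracking continuously as the camera moves; the sketch only shows the core idea of finding a surface and tying a digital object to it.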

Within manufacturing environments, this means that the background environment and the information displayed for users would stay in sync in real time. That’s significant because the AI-driven AR system could:

  • Display expected inventory values at each location in real time and verify counts whenever someone accesses the location.
  • Help a robotic vision system count inventory, forwarding an image to a human if the robot detects a discrepancy or needs secondary verification (a simple version of this check is sketched after this list).
  • Monitor repetitive tasks and inform workers if they do not complete tasks correctly.
  • Enable a system to track and verify inventory digitally to validate a part's dimensions and critical features.
  • Identify discrepancies and discover the root cause of those issues.
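
As a small illustration of the first two items above, the sketch below (with hypothetical data and names, not a real system) compares expected inventory values against counts reported by a vision system and flags any location whose discrepancy exceeds a tolerance so the forwarded image can be reviewed by a human.

```python
from dataclasses import dataclass

@dataclass
class BinStatus:
    location: str
    expected: int   # inventory the system expects at this location
    counted: int    # count reported by the robotic vision system

    @property
    def discrepancy(self) -> int:
        return self.counted - self.expected

def review_queue(bins, tolerance=0):
    """Return locations whose count disagrees with the expected inventory
    by more than the allowed tolerance, for human verification."""
    return [b for b in bins if abs(b.discrepancy) > tolerance]

bins = [
    BinStatus("A-01", expected=120, counted=120),
    BinStatus("A-02", expected=45, counted=42),    # shortfall, gets flagged
    BinStatus("B-07", expected=300, counted=300),
]
for b in review_queue(bins):
    print(f"{b.location}: expected {b.expected}, counted {b.counted} "
          f"(delta {b.discrepancy:+d}); forward image for human review")
```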

The Future of Manufacturing with AI-Driven AR

AR paired with AI could help solve many problems with the human–machine interface. The way information is displayed and communicated could be instant and prioritized by urgency rather than by proximity to a human’s location or line of sight. This type of interaction would create the perfect human–machine symbiosis. Further, AI-driven AR will likely be a key technology in driving manufacturing businesses into Industry 5.0, which focuses on product customization and human–machine collaboration. Companies that recognize technology as an extension of human capability will need to understand that harmonious communication and decision-making are paramount. An industrial environment in which information arrives in real time, directly in users' field of vision, is within arm’s reach.

About the Author

Mario Sheppard is a United States Navy submarine veteran who is currently the lead engineer for automation and manufacturing technology at Supernal, the e-VTOL division of Hyundai Motor Group. Over the years, he’s led successful automation-robotics projects across a diverse group of industries, with most of his experience coming from automotive. He specializes in hardware and software integration, programming, technology development, machine vision, operations management, and manufacturing planning. Mario has bachelor's degrees in Electrical and Mechatronics Engineering and is an MBA candidate at Louisiana State University.
