Intel - Reimagining What's Next

The initial version of the Firefly uses a 1.6MP Sony Pregius CMOS image sensor. This 60-FPS global-shutter sensor delivers excellent imaging performance, even in challenging lighting conditions. Future iterations of the Firefly camera will offer increased flexibility with additional sensor options.

The Intel Movidius Myriad 2 VPU is a system-on-chip (SoC) design that enables high-performance, on-camera image processing and inference (Figure 3). Key features of the VPU include the following:

• Hardware accelerators for image processing are purpose-built for imaging and computer vision.

• 16 streaming hybrid architecture vector engine (SHAVE) processor cores accelerate on-camera inference based on DNNs, including vector data processing that is better suited to the branching logic of neural networks than the more general-purpose cores found in graphics processing units (GPUs).

• General-purpose RISC CPU cores support interaction with external systems, parse and schedule workload processing on the SHAVE processor cores, and execute the actual on-camera inferences.

The advanced firmware that ships with the Firefly adds significant value. Key firmware machine-vision features include the Vision protocol, eight- and 16-bit raw pixel formats, pixel binning, and a selectable region of interest. In addition, the firmware offers control of the four GPIO ports, allowing other systems to trigger the camera, as well as enabling the camera to trigger external equipment such as lighting, actuators, or other cameras.

Use Cases

Artificial intelligence is disruptive in the machine-vision field because of its ability to answer questions that require judgment, that is, questions whose answers could not have been specified in advance by preset rules. Deep neural networks are trained on large amounts of sample data, and the resulting trained model is then uploaded to the Firefly camera.
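Two of the firmware features described above, pixel binning and region-of-interest selection, can be illustrated with a short sketch. This is a plain-Python simulation of what the camera performs on-sensor, not a camera API; the function names and the toy frame are hypothetical, for illustration only.

```python
# Illustrative simulation of two on-camera preprocessing features:
# region-of-interest (ROI) selection and 2x2 pixel binning.
# The camera performs these on-sensor; these helpers are hypothetical.

def select_roi(frame, top, left, height, width):
    """Crop a raw frame (a list of pixel rows) to a region of interest."""
    return [row[left:left + width] for row in frame[top:top + height]]

def bin_2x2(frame):
    """Sum each 2x2 block of pixels into one value, halving resolution
    in both dimensions while improving low-light sensitivity."""
    binned = []
    for r in range(0, len(frame) - 1, 2):
        row = []
        for c in range(0, len(frame[r]) - 1, 2):
            row.append(frame[r][c] + frame[r][c + 1]
                       + frame[r + 1][c] + frame[r + 1][c + 1])
        binned.append(row)
    return binned

# A toy 4x4 "raw frame" of pixel intensities
frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]

roi = select_roi(frame, top=0, left=0, height=2, width=2)  # [[1, 2], [5, 6]]
binned = bin_2x2(frame)                                    # [[14, 22], [46, 54]]
```

Both operations reduce the amount of data that must cross the camera's interface, which is why performing them in firmware rather than on the host is attractive.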
Figure 4 illustrates examples of use cases enabled by on-camera execution of deep neural networks:

• Robotic guidance can help industrial, healthcare, and consumer robots interact in more sophisticated ways with objects, including avoiding obstacles when navigating unfamiliar spaces.

• Quality inspection can be automated and sophisticated, such as gauging whether variations in a pattern are acceptable in a textile manufacturing scenario.

• Biometric recognition based on inputs such as face, thumbprint, or iris scans can be used to govern access authorization for facilities, computer systems, or other resources.

• Precision agriculture can draw on the analysis of crop-condition images taken in the visible and infrared spectrums to guide efficient application of herbicides and pesticides.

• Medical imaging implementations include histology usages to flag anomalies in biopsies as a first-pass screening or as a fail-safe measure to identify false negatives after standard reads by medical personnel.

Figure 4: Example use cases for on-camera inference.

Figure 3: Pass/fail quality inferences during inspection of manufactured parts with the prototype Firefly camera.
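The pass/fail inspection pictured in Figure 3 ultimately reduces to thresholding the trained network's output. The following is a minimal sketch, assuming a hypothetical two-class (pass/fail) model whose raw output scores are converted to probabilities with softmax; the scores and the confidence threshold are made up for illustration.

```python
import math

def softmax(scores):
    """Convert raw network output scores into probabilities that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def pass_fail(scores, threshold=0.9):
    """Declare 'pass' only when the model is confident the part is good.

    scores: [pass_score, fail_score] from a hypothetical two-class model.
    Anything below the confidence threshold is failed for human review.
    """
    p_pass, _ = softmax(scores)
    return "pass" if p_pass >= threshold else "fail"

# Made-up raw scores for two inspected parts
confident_good = pass_fail([4.2, 0.3])  # -> "pass"
ambiguous = pass_fail([1.1, 0.9])       # -> "fail" (flagged for review)
```

Biasing the threshold toward "fail" in this way mirrors the fail-safe screening pattern mentioned for medical imaging: the camera errs on the side of escalating ambiguous cases rather than missing defects.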
