Issue link: https://resources.mouser.com/i/1481902
Intelligence at the Edge

SUPERVISED LEARNING

Supervised learning is the most common form of ML. An algorithm must be trained before it can infer, and supervised learning trains on labeled data. The two main categories of supervised learning are classification and regression.

Classification models make discrete ("yes" or "no") predictions. The training labels specify the classes associated with the signals. For electric motor monitoring, the signal is the motor control current, and the classes or "states" may include nominal operation, shaft misalignment, clogged filter, and bearing problem. The ML algorithm determines how similar (normal) or dissimilar (abnormal) the current state of the motor is to the states learned during training.

Regression models predict quantities instead of classes by establishing the correlation between one or more independent variables (causes) and a dependent variable (effect), such as the relationship between height and weight or between time and temperature. They predict "How much?" or "How many?" quantities such as price, temperature, sales, dosage, growth, risk, and rainfall from independent variables such as age, weight, activity level, interest rate, symptoms, humidity, and sunlight.

Figure 4: ML model performance analysis. (Source: Mouser Electronics)

UNSUPERVISED LEARNING

Unsupervised learning trains on unlabeled signals to find similarities and differences between data points. The algorithm groups the signals it receives into representative clusters, which are then used after training to determine whether new signals are usual or unusual. Unsupervised learning is used when examples of abnormal system behavior are not readily available for training. This powerful ML method is well suited to anomaly detection and predictive maintenance, where algorithms are trained only on nominal data from deployed machines.
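The motor-state classification described above can be sketched with a nearest-centroid classifier: average the labeled training signals per state, then assign a new signal to the closest state. The feature names and numeric values below are illustrative assumptions, not from the eBook.

```python
# Minimal classification sketch: label a new motor-current feature
# vector with the most similar trained state. All feature values
# are illustrative assumptions.
def centroid(samples):
    """Average the feature vectors of one labeled class."""
    return [sum(col) / len(col) for col in zip(*samples)]

def train(labeled):
    """labeled: {state: [feature vectors]} -> {state: centroid}."""
    return {state: centroid(vs) for state, vs in labeled.items()}

def classify(model, features):
    """Return the state whose centroid is closest to the new signal."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda s: dist(model[s]))

# Features per sample: (RMS current, dominant harmonic amplitude).
training = {
    "nominal":            [(1.0, 0.1), (1.1, 0.1)],
    "shaft misalignment": [(1.4, 0.6), (1.5, 0.7)],
    "bearing problem":    [(2.0, 0.3), (2.1, 0.4)],
}
model = train(training)
print(classify(model, (1.45, 0.65)))  # -> shaft misalignment
```

A production system would use a richer model and many more features, but the principle is the same: similarity to learned states decides the label.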
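The regression idea, predicting "how much" from an independent variable, can be sketched with an ordinary least-squares line fit. The time-versus-temperature data below is invented for illustration.

```python
# Minimal regression sketch: fit a line relating one independent
# variable (cause) to a dependent variable (effect), then predict
# "how much". The data points are illustrative assumptions.
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hours elapsed vs. temperature (degC): roughly 2 degrees per hour.
hours = [0, 1, 2, 3, 4]
temps = [20.0, 22.1, 23.9, 26.0, 28.1]
slope, intercept = fit_line(hours, temps)
print(round(slope * 6 + intercept, 1))  # predicted temperature at hour 6
```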
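The unsupervised, nominal-data-only approach can be sketched by collapsing the clustering step to a single nominal cluster: summarize what "normal" looks like, then flag readings that fall far from it. The RMS-current feature and the 3-sigma threshold are illustrative assumptions.

```python
# Minimal anomaly-detection sketch: learn only from nominal data,
# then flag new signals that fall far from what was seen in training.
# The feature (RMS motor current) and the 3-sigma threshold are
# illustrative assumptions.
from statistics import mean, stdev

def fit_nominal(samples):
    """Summarize nominal training data as (mean, std)."""
    return mean(samples), stdev(samples)

def is_anomaly(model, value, k=3.0):
    """Flag a reading more than k standard deviations from nominal."""
    mu, sigma = model
    return abs(value - mu) > k * sigma

# Train on nominal motor-current readings (amps), then score new ones.
nominal = [1.02, 0.98, 1.01, 0.99, 1.03, 0.97, 1.00, 1.02]
model = fit_nominal(nominal)
print(is_anomaly(model, 1.01))  # nominal reading -> False
print(is_anomaly(model, 1.60))  # unusual reading -> True
```

Note that no abnormal examples were needed for training, which is exactly why this approach suits predictive maintenance on deployed machines.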
REINFORCEMENT LEARNING

Reinforcement learning (RL) uses a feedback algorithm that tells the model being trained whether it is on the right path. Instead of learning from labeled data, RL provides real-time decision feedback so the model can learn from its own experience. Decisions that approach intended goals are reinforced to guide learning; smaller rewards for intermediate goals and larger rewards for end goals support multi-step decision-making strategies. This requires an environment that responds to the RL algorithm's decisions in a real-world manner and that can be run indefinitely for iterative trial-and-error learning. Although RL is an emerging and exciting topic, it is beyond the scope of this eBook.
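The reward structure described above can be sketched with tabular Q-learning on a tiny "chain" environment: small rewards for intermediate steps toward the goal, a larger reward at the end goal. The environment, states, and constants are illustrative assumptions, not from the eBook.

```python
# Minimal tabular Q-learning sketch: the environment responds to each
# decision with a next state and a reward, and repeated episodes of
# trial and error shape the policy. All constants are illustrative.
N_STATES = 5          # states 0..4; state 4 is the end goal
ACTIONS = (+1, -1)    # move right or left
ALPHA, GAMMA = 0.5, 0.9

def step(state, action):
    """Environment: respond to a decision with (next state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    if nxt == N_STATES - 1:
        return nxt, 10.0, True                          # large end-goal reward
    return nxt, (0.1 if action == +1 else 0.0), False   # small intermediate reward

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):       # iterative trial-and-error learning
    state, done = 0, False
    while not done:
        # Greedy selection; ties break toward +1 (first in ACTIONS),
        # which keeps this sketch deterministic.
        action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the learned policy moves right toward the goal.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # -> [1, 1, 1, 1]
```

Real RL adds exploration, stochastic environments, and function approximation, but the reinforced-feedback loop is the same.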