Issue link: https://resources.mouser.com/i/1499313
Renesas 2023

Making Smart Devices Smarter—but Not without Trade-Offs

AI is revolutionizing the way smart devices operate, enabling them to make decisions without relying on external systems for input. This is beneficial in many ways: the data can stay on the device, which is safer and more private, and the device can act on that data without immediate connectivity, which helps endpoints that have only sporadic access to a connection. However, these benefits come with trade-offs: depending on the model and its size, AI models can be slower to run and consume more power. This can be a major limiting factor at the edge, because for machine learning (ML) to be effective in these applications, the model must be small and fast enough to run on the device. Therefore, although AI-enabled devices can be more powerful, they come with trade-offs that engineers must consider.

Balancing Improved Intelligence with Impact on Performance

Design engineers should consider the return on investment (ROI) when deciding whether to add AI to an IoT endpoint. Before embarking on the decision to embed ML in these endpoints, the expected value should be defined and quantified as much as possible. Doing so helps not only in understanding how the costs stack up against the expected gains but also in setting the criteria for the project's success (e.g., the model's accuracy or some other metric). This makes it easier to determine whether the value of the AI outweighs its impact on the device's performance. In most cases, ML algorithms require more computing power and more energy, which can reduce the device's battery life or noticeably degrade its latency. If the device is already running at maximum capacity, the ML algorithms may not run at a satisfactory level.
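Quantifying the success criteria up front can be as simple as a go/no-go check against a small set of thresholds. The sketch below illustrates the idea; every threshold and measurement in it is a hypothetical example, not a figure from this article:

```python
# Hypothetical go/no-go check for embedding ML in an IoT endpoint.
# Metric names, thresholds, and measured values are all illustrative
# assumptions chosen for this sketch.

def meets_success_criteria(measured, criteria):
    """Return True only if every measured metric satisfies its criterion."""
    return (
        measured["accuracy"] >= criteria["min_accuracy"]
        and measured["latency_ms"] <= criteria["max_latency_ms"]
        and measured["energy_mj_per_inference"] <= criteria["max_energy_mj"]
    )

criteria = {
    "min_accuracy": 0.90,    # model must be at least 90% accurate
    "max_latency_ms": 50.0,  # inference must finish within 50 ms
    "max_energy_mj": 5.0,    # and cost no more than 5 mJ per inference
}

measured = {
    "accuracy": 0.93,
    "latency_ms": 42.0,
    "energy_mj_per_inference": 4.1,
}

print(meets_success_criteria(measured, criteria))  # True for these numbers
```

Writing the criteria down this way forces the team to agree on concrete numbers before any model is built, which makes the later ROI comparison objective rather than impressionistic.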
Thus, design engineers should examine the device's use case carefully to determine whether AI would be beneficial without placing too much additional strain on the device's resources. For devices with constant access to a power source, the additional cost of computation at the source becomes more of an issue than battery life.

In addition to weighing ROI against device performance, design engineers should examine whether the task can be achieved with ML at all. Given the hype around AI, it is tempting to apply it to every use case without thorough investigation. However, researching the prevailing methods in the field, and the applications that continue to elude satisfactory ML solutions, can reveal potential issues before the process of adding AI to IoT endpoints even begins. At this point, design engineers should also determine whether the required capabilities can be achieved without ML and whether the use case is better suited to an AI-enabled device or a simpler alternative. Typically, the first step is to review the literature and identify which methods have been used successfully before. Then, using that information as a guide, build a baseline proof of concept using simple, rule-based approaches or less sophisticated algorithmic implementations. This baseline can be used to assess how well a non-ML solution might work; if it does not meet the success criteria, then progressing to an ML-based solution may be in order.

Assessing Feasibility among All the Constraints

Assuming that the value expected from introducing ML outweighs the costs and potential performance impacts, the next step in the decision-making process is to assess whether the ML lifecycle is possible within the devices' current technical and physical constraints.
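The baseline-first workflow can be sketched with a trivial rule-based detector evaluated against the project's success criterion. The sensor readings, threshold, and accuracy target below are all invented for illustration:

```python
# Minimal sketch of the baseline-first workflow: evaluate a rule-based
# detector against the success criterion before considering ML.
# The data, threshold, and target accuracy are hypothetical.

def rule_based_detector(reading, threshold=75.0):
    """Flag a sensor reading as anomalous if it exceeds a fixed threshold."""
    return reading > threshold

def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical labeled sensor data: (reading, is_anomaly)
data = [(62.0, False), (71.5, False), (88.2, True), (79.9, True),
        (55.3, False), (91.0, True), (74.0, False), (77.1, False)]

preds = [rule_based_detector(reading) for reading, _ in data]
labels = [label for _, label in data]

baseline_acc = accuracy(preds, labels)  # 7 of 8 correct -> 0.875
SUCCESS_CRITERION = 0.95                # assumed target accuracy

if baseline_acc >= SUCCESS_CRITERION:
    print(f"Baseline suffices: accuracy {baseline_acc:.2f}")
else:
    print(f"Baseline accuracy {baseline_acc:.2f} is below target; "
          f"an ML-based solution may be in order")
```

In this toy case the fixed threshold misclassifies one borderline reading, so the rule-based baseline falls short of the target and escalating to ML would be justified; had it met the criterion, the simpler solution would win.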
IoT endpoints that are not normally Wi-Fi enabled and that produce vast quantities of unlabeled data, stored only briefly at the edge, will score much lower on the feasibility scale than those with cloud access, where data can be pooled and annotated more easily and where compute power is more readily available. In particular, because AI generally requires training, meaning the model must have access to enough data to learn from, how that training will be done is a key aspect of feasibility. For example, putting ML in an endpoint, though not impossible, is significantly less feasible if the data are distributed and cannot be pooled for training owing to privacy, storage, or connectivity constraints. In these cases, training could occur in a federated fashion, but the cost and complexity of doing so could outweigh the benefits.
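To make the federated option concrete, the following is a minimal sketch of federated averaging (in the style of FedAvg) for a toy one-parameter linear model: each endpoint trains on its own data and shares only model weights, which a coordinator averages. The endpoint data and hyperparameters are illustrative assumptions:

```python
# Hedged sketch of federated averaging: endpoints train locally and
# share only weights, never raw data. The one-parameter model
# (y ~ w * x), the local datasets, and the hyperparameters are toy
# values chosen for illustration.

def local_update(w, data, lr=0.01, epochs=5):
    """One endpoint's training pass: gradient descent on local (x, y)
    pairs for the model y ~ w * x, using squared-error loss."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_average(weights):
    """Coordinator step: average the endpoints' weights."""
    return sum(weights) / len(weights)

# Two endpoints whose data cannot be pooled (privacy/connectivity).
endpoint_data = [
    [(1.0, 2.1), (2.0, 3.9)],   # device A's local readings
    [(1.5, 3.2), (3.0, 6.1)],   # device B's local readings
]

w_global = 0.0
for _ in range(10):  # a few communication rounds
    local_ws = [local_update(w_global, d) for d in endpoint_data]
    w_global = federated_average(local_ws)

print(round(w_global, 2))  # converges toward w ~ 2, the trend shared by both devices
```

Even this toy version shows where the cost and complexity come from: every round requires a weight exchange with each endpoint, so communication overhead, stragglers, and non-uniform local data all become engineering problems that a centralized training pipeline never faces.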