Machine learning (ML) has produced a flood of computer vision papers, from object detection to activity recognition. Alongside the many papers building better models, a growing body of work does the opposite: adversarial ML. It is now fashionable to attack existing, well-trained models. In one well-known example, a person wearing glasses printed with specially designed color patterns is recognized by the machine as someone else entirely (shown in the figure below). Papers on this topic are easy to find.
While interesting in its own right, this suggests that adversarial ML is applicable to many other fields as well, for example the wireless domain. Consider activity recognition via wireless signals: from a privacy perspective, a user could wear a small device that emits wireless signals to disturb the activity tracker. Anti-sensing work does exist, but the most convincing approaches still rely on highly advanced full-duplex devices.
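To make the jamming intuition concrete, here is a minimal toy sketch, not any real system: a naive "tracker" that detects a breathing-like activity from the dominant frequency of a sensed waveform, and a hypothetical wearable that emits a stronger tone to mask that signature. The signal model, thresholds, and frequencies are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                      # sample rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)    # 10-second sensing window

# Toy "activity signal": a 0.3 Hz breathing-like oscillation plus noise.
signal = np.sin(2 * np.pi * 0.3 * t) + 0.1 * rng.standard_normal(t.size)

def dominant_freq(x, fs):
    """Return the strongest non-DC frequency component of x."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    spec[0] = 0.0               # ignore the DC bin
    return freqs[np.argmax(spec)]

# A naive tracker: declares "breathing" if the dominant frequency is < 1 Hz.
def tracker_says_breathing(x, fs):
    return dominant_freq(x, fs) < 1.0

print(tracker_says_breathing(signal, fs))           # True: activity detected

# The wearable emits a stronger 5 Hz tone that masks the real signature.
jammer = 3.0 * np.sin(2 * np.pi * 5.0 * t)
print(tracker_says_breathing(signal + jammer, fs))  # False: tracker disturbed
```

Real trackers are far more robust than a single-frequency threshold, which is exactly why serious anti-sensing work ends up needing hardware like full-duplex radios rather than a blind tone.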
Privacy is a case of using "the bad" for "the good." We can also consider the bad itself, in the traditional adversarial setting. For example, just as perturbing a few pixels can fool an ML detector on photos, injecting carefully crafted wireless signals could fool an ML detector that operates on wireless signals. I have not yet seen much work along this line, mainly because there are still almost no published ML-based wireless signal detectors in the first place. The idea may be ahead of its time, but it is definitely something worth thinking about and exploring.
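The pixel-injection idea can be sketched with the classic fast gradient sign method (FGSM) on a toy linear detector; everything here (the weights, the input, the budget eps) is made up for illustration, and the point is only that a tiny, bounded perturbation in the gradient direction flips the decision. In principle the same gradient-based recipe applies whether the input samples are pixels or wireless signal samples.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy linear "detector": positive score => class 1 (e.g. "signal present").
w = rng.standard_normal(64)     # hypothetical learned weights
b = 0.0

def predict(x):
    return int(w @ x + b > 0)

# An input the detector confidently labels class 1.
x = 0.05 * np.sign(w) + 0.01 * rng.standard_normal(64)

# FGSM: for a linear model the gradient of the score w.r.t. x is just w,
# so the worst-case L-infinity perturbation of size eps is -eps * sign(w).
eps = 0.06
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))   # 1 0: a small perturbation flips the label
```

The perturbation is bounded by eps per sample, so the adversarial input stays close to the original, which is the defining property of this family of attacks.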