AI and embedded systems
Ethical issues arise in the use of AI
Engineers rarely engage with ethics, the branch of philosophy that, at its simplest, concerns what we consider to be right and wrong. After all, much of what engineers work with is black and white, functioning or non-functioning, with little room for gray zones. It could be argued that the natural desire to “do the right thing” is inherent in the engineering psyche and that, ethically speaking, we are therefore always trying to do good and improve the world.
For example, the developers of an automotive braking system are inherently focused on delivering a safe system that functions correctly under all conditions. Additionally, standards and checks are in place to ensure the safety of the resulting product. The same applies to industrial engineers developing robotic systems that operate in close proximity to humans.
AI isn’t simply a new tool
So why can’t AI simply be incorporated into the engineering toolbox like other technologies before it? Well, AI and its branches, such as machine learning (ML) and deep learning (DL), enable capabilities that previously could only have been performed by humans. In the past decade alone, the image recognition accuracy of DL tools has risen from 70% to around 98%; by comparison, humans average 95%. Such capability is often available as ready-to-use open-source models that anyone can download, and the hardware required is relatively cheap and easy to source.
Thus, the barrier to entry is very low. Suddenly, a task that would have required a human to review images can be done by a machine. In and of itself, this is no immediate threat and is comparable to building a robot that can replace a human assembly operator. The real issue is the ease of scalability. Suddenly, thousands of images per second can be reviewed, with financial investment and hardware availability the only limiting factors. While this could benefit a factory’s optical inspection system by improving quality, it could also be deployed for nefarious use in authoritarian states.
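To illustrate just how low that barrier is, the sketch below loads a publicly available, pretrained image classifier and runs it over a folder of images. It is a minimal sketch assuming PyTorch and torchvision are installed; the folder name is hypothetical, and nothing here is tied to any specific product discussed in this article.

```python
# Minimal sketch: off-the-shelf image classification with an open-source,
# pretrained model (assumes PyTorch/torchvision; folder name is hypothetical).
from pathlib import Path

import torch
from PIL import Image
from torchvision import models

# Download and load a pretrained ImageNet classifier in one call.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

# Classify every image in a local folder; scaling up is mostly a matter
# of adding hardware and batching, not engineering effort.
for path in Path("images").glob("*.jpg"):
    batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)
    conf, idx = probs.max(dim=1)
    print(f"{path.name}: {labels[idx.item()]} ({conf.item():.1%})")
```

A few dozen lines, a consumer GPU, and freely downloadable weights are enough to review images at a scale no human team could match.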
The dual-use dilemma
This dual-use dilemma has existed for centuries. The humble butter knife is also a dagger, and a ticking clockwork mechanism can be the trigger for a bomb. There are always two types of users: those who use technology as intended and those who use or repurpose it for malevolent objectives.
Scientists have often grappled with this issue while developing viruses and dangerous chemicals. In their paper “Ethical and Philosophical Consideration of the Dual-use Dilemma in the Biological Sciences,” Miller and Selgelid discuss these issues at length. For example, should chemicals capable of causing mass destruction be developed so that antidotes can be created? And if such work is undertaken, should the results be shared fully with the research community, or should the outcomes be shared in a manner that limits a reader’s ability to replicate the experiment?
Their paper sets out several options for regulating dual-use experiments and sharing the resulting information. At one extreme, the decision is left in the hands of those conducting the experiments; at the other, it is up to governments to legislate. In the middle ground, research institutes and governmental or independent authorities are proposed as arbiters. The authors recommend these middle ways as the best approach, as they balance the moral value of academic freedom against the occasions when it must be overridden.
For those considering AI in embedded systems, the paper offers useful ideas for dealing with some of these ethical challenges.
Engineers must also be aware of the increased number of domains they touch with AI-driven technology. For example, ML algorithms can improve the safety of drones, enabling them to avoid collisions with objects or people. But the same hardware and software framework could, with little effort, be reprogrammed for nefarious or military purposes. The addition of face recognition technology could allow the device to attack and injure a human target autonomously. The ethical question raised by this potential misuse is: are we obliged to implement a form of security that hinders the execution of unauthorized code?
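One way to picture such a safeguard is cryptographic code signing, where the device agrees to execute only images signed by a trusted vendor key. The sketch below is illustrative only, assuming Python with the cryptography package; the names, key handling, and firmware format are hypothetical, and a real embedded implementation would live in a secure bootloader backed by hardware key storage.

```python
# Illustrative sketch of signed-code verification (Python 'cryptography' package).
# Names and key handling are hypothetical; a real device would verify in its bootloader.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The vendor keeps the private key; only the public key is provisioned on the device.
vendor_key = Ed25519PrivateKey.generate()
device_trusted_pubkey = vendor_key.public_key()

def is_authorized(code_image: bytes, signature: bytes) -> bool:
    """Return True only if the code image carries a valid signature from the vendor key."""
    try:
        device_trusted_pubkey.verify(signature, code_image)
        return True
    except InvalidSignature:
        return False

firmware = b"compiled drone control code"
signature = vendor_key.sign(firmware)

print(is_authorized(firmware, signature))         # True: the signed build may run
print(is_authorized(firmware + b"!", signature))  # False: a tampered build is rejected
```

Such a check does not resolve the dual-use dilemma, but it raises the effort required to repurpose the hardware for unintended ends.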