Foundations for the responsible use of AI
How can AI be used responsibly? What foundations do companies need to lay? The use of artificial intelligence (AI) to establish new business models, analyze large volumes of data and automate business processes offers enormous potential for companies and is playing an increasingly important role in the development of embedded systems. However, legal and ethical standards must be taken into account from the outset.
Control, traceability and transparency in dealing with artificial intelligence
Standards for data use and analysis by AI
Precisely because the potential of AI appears to be limitless, we need to consider how it will impact our society and address issues around morality and risk in this context. Only when companies fully understand the AI they are using and the decisions it makes can they maintain control over their algorithms.
It is essential, therefore, that companies harnessing AI make its use as open and understandable as possible for stakeholders. This helps reduce fears and prejudices about the use of algorithms and data-driven analytics. If you want to retain the trust of both your customers and your employees, you should first create general acceptance for these systems.
To this end, it makes sense to involve all relevant departments in this process, as AI has potential benefits for many activities, from sales and marketing to employee engagement and customer service.
The legal department can create standards and check internal regulations on data use and analysis for compatibility with government guidelines and laws.
A key role is played by the HR department as an intermediary between management and employees. It can review appraisal processes and training proposals and should be consulted on the basis on which personnel decisions are made.
Finally, sales can leverage the value of AI to create a competitive product offering. To do so, it should be trained to understand what opportunities and challenges arise from the use of AI in sales and marketing.
AI systems are not unbiased
Fundamentally, AI systems use certain features to trigger predefined actions. Algorithm-based models are never completely unbiased, because the data on which they are based are assessed and categorized according to predefined criteria. Legal and ethical standards therefore play a crucial role in defining those actions.
For example, a person's hometown is, in and of itself, just a fragment of data. But if the AI uses this fragment to discriminate against a customer, for instance by presuming a lower income and therefore lower profitability, that is unfair and unethical discrimination. If, however, the same characteristic is used to play hold music that reminds the customer of home, it may have a positive impact on the customer experience.
Once the AI systems are in place, it is important to optimize the algorithms continuously. They should also be based on a sufficiently broad data set to reduce bias.
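One common way to check a decision rule for the kind of bias described above is to compare outcome rates across groups defined by a sensitive feature such as hometown. The following is a minimal, hypothetical sketch of such a check; the data, field names, and the 0/1 "approved" encoding are invented for illustration and do not come from the article.

```python
# Hypothetical sketch: measuring a demographic parity gap for a
# feature-based decision rule. All records here are invented.

def approval_rate(decisions):
    """Share of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(records):
    """Absolute difference in approval rates between two hometown groups.

    A gap of 0 means both groups are treated the same on average;
    a large gap is a signal to review the rule and its training data.
    """
    group_a = [r["approved"] for r in records if r["hometown"] == "A"]
    group_b = [r["approved"] for r in records if r["hometown"] == "B"]
    return abs(approval_rate(group_a) - approval_rate(group_b))

# Invented example data: group A is approved far more often than group B.
records = [
    {"hometown": "A", "approved": 1},
    {"hometown": "A", "approved": 1},
    {"hometown": "A", "approved": 0},
    {"hometown": "A", "approved": 1},
    {"hometown": "B", "approved": 0},
    {"hometown": "B", "approved": 1},
    {"hometown": "B", "approved": 0},
    {"hometown": "B", "approved": 0},
]

gap = demographic_parity_gap(records)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

A large gap does not by itself prove unethical discrimination, but it flags exactly the kind of outcome difference that the guidelines discussed above should require someone to investigate.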
Responsible use of AI
A responsible approach to AI systems thus rests on three pillars:
- Control of results produced by AI using guidelines;
- Traceability of AI decisions from data collection to action;
- Transparency regarding data collection and analysis by AI.
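In practice, the traceability pillar is often implemented as an audit trail: every AI decision is recorded together with the inputs and the rule or model version that produced it, so it can later be reviewed against guidelines (control) and disclosed to stakeholders (transparency). The sketch below is a hypothetical minimal example; the model name, field names, and decision labels are assumptions for illustration, not part of the original article.

```python
# Hypothetical sketch of a decision audit trail supporting the three
# pillars: each decision is logged with its inputs and model version
# (traceability), can be reviewed against guidelines (control), and
# the record can be shown to stakeholders (transparency).

import json
from datetime import datetime, timezone

audit_log = []

def record_decision(model_version, inputs, decision, rule_applied):
    """Append one fully traceable decision record to the audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
        "rule_applied": rule_applied,
    }
    audit_log.append(entry)
    return entry

# Invented example: log a retention-offer decision with its context.
entry = record_decision(
    model_version="churn-model-1.2",
    inputs={"hometown": "A", "tenure_months": 18},
    decision="offer_retention_discount",
    rule_applied="high_churn_risk_threshold",
)
print(json.dumps(entry, indent=2))
```

Keeping the inputs and the applied rule in the same record is what makes a decision traceable "from data collection to action": an auditor can reconstruct why the system acted as it did without access to the live model.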
Companies remain responsible for the decisions made on the basis of the algorithms throughout the entire lifecycle of an AI system. This not only implies a legal responsibility, but also an ethical one.
In Track 8 of the embedded world Conference 2023 you will find presentations on the use of AI in embedded systems.
Source: Read original by Heinrich Welter, Genesys, on elektroniknet.de