
Trust in Artificial Intelligence

How we create and use trustworthy AI systems

by Sandra Wartner, MSc

Artificial intelligence (AI) already supports us, consciously or unconsciously, in many areas of our everyday lives. We interact with automated assistants such as voice assistants or the parking aid in our cars, use facial recognition to unlock our smartphones, and receive music and movie recommendations. AI is already very present in our everyday lives – perhaps even more than we realize – and the number of AI applications will continue to grow. As we create and use AI systems, we need to be aware not only of the opportunities, but also of the risks and challenges involved. Guidelines can help us create AI systems that are not only efficient, but also ethical, safe, and trustworthy.

Table of contents

  • AI on the rise
  • Current challenges and problems
  • Guidelines for creating trustworthy AI systems
  • Conclusion
  • Author

AI on the rise

The advantages and areas of application of AI are multifaceted. A few simple examples make this easier to grasp:

  • Recommendation systems try to learn our tastes from our previous choices and then suggest content we like. Streaming providers like Spotify or Netflix have been relying on this kind of AI for years.
  • Natural Language Processing (NLP) methods help us understand or generate texts. This makes it possible, for example, to translate texts automatically at such high quality that the result is often indistinguishable from text written by humans.
  • In industry, AI can flag machines for maintenance at an early stage, thereby reducing or completely preventing failures. Such applications are referred to as predictive maintenance. More generally, predictive analytics can be used to predict machine failures, the quality of manufactured products, or even orders and order quantities (a minimal sketch of this idea follows after this list). Prescriptive analytics – a current field of research that extends predictive analytics – attempts to derive optimal decisions from predictions, causal relationships, and domain knowledge.
  • In medicine, AI systems can support medical professionals in both diagnosis and treatment. For example, tumors or diseases can be detected automatically on the basis of image data, diagnoses can be suggested on the basis of patient data and symptoms, and disease progression can be analyzed and predicted.
  • AI systems in cars enable automatic traffic sign recognition, help keep the vehicle in its lane, and even enable autonomous driving.
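To make the predictive maintenance example above more concrete, here is a minimal sketch in Python. The data file, the column names, and the 7-day failure label are hypothetical assumptions for illustration; any real system would of course use its own sensor data and features.

```python
# Minimal predictive-maintenance sketch (illustrative only).
# Assumes a hypothetical file "sensor_log.csv" with per-machine sensor
# readings and a binary "failure_within_7d" label.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("sensor_log.csv")
X = df[["temperature", "vibration", "pressure", "runtime_hours"]]
y = df["failure_within_7d"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# The classifier learns patterns that precede failures, so maintenance
# can be scheduled before a machine actually breaks down.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```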

Current challenges and problems

However, the use of AI systems also brings disadvantages and risks with it. Systems that are immature or insufficiently tested can be prone to errors. If the AI a streaming provider uses for user profiling is poor, you as a user will be suggested content that does not interest you or that you do not want to watch. If product quality or machine failures are predicted incorrectly, the result can be downtime, lost revenue, and increased costs. When AI is applied in more sensitive areas, the consequences can be far more tragic. Consider, for example, an autonomous vehicle whose AI is poor at distinguishing people from objects. In the worst case, the vehicle avoids the trash can and hits the person instead of the other way around.

In addition to the possible susceptibility of AI systems to errors, other aspects also need to be considered. Currently, ethical aspects, data protection, and the transparency of AI systems are the main topics of discussion. Errors and inaccuracies in the data used, in the implementation, or in the interpretation of the results can lead to people or groups being discriminated against, being restricted in their ability to make decisions, or being harmed in some other way. A well-known example: bias (distortion) in the data causes models trained on that data to reflect the same bias. A merely accurate model eventually learns to decide as humans would – so if the underlying data represent racist decisions made by humans, the model will also make racist decisions. A truly good model, the kind we would want as a society – one that is safe and trustworthy – should be able to cope with such challenges in the data, or at least give us a way to detect and prevent them.
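One way to make such bias visible – a minimal sketch, assuming hypothetical model decisions and group labels that are not taken from any real system – is to compare the rate of positive decisions across the groups of a protected attribute:

```python
# Illustrative bias check: compare a model's positive-decision rates
# across the groups of a protected attribute (hypothetical data).
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = approved
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
print("Approval rate per group:", rates)

# A "disparate impact" ratio far below 1.0 signals that one group is
# approved much less often -- a hint that bias in the training data
# may have been learned by the model.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f}")
```

Such a check is only a starting point; in practice, fairness has to be assessed with domain knowledge and across several complementary metrics.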

In addition to data quality, the interpretability of AI models is often a major challenge. With black-box models, the decision of the AI system cannot be understood, or only very imprecisely – yet this would be extremely important for the transparency and trustworthiness of the system. Imagine an AI deciding whether or not you get a loan. In case of rejection, you will want to know why you were assessed the way you were; you even have the right to know why an AI made a decision about you. Here a major current problem becomes apparent: if you want to be able to understand the decisions of an AI, you cannot use models whose results cannot be fully explained – even if those models make better assessments than others.
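As a contrast to a black-box model, the following sketch trains an inherently interpretable logistic regression on synthetic loan data. The feature names and the data-generating rule are invented purely for illustration:

```python
# Sketch of an interpretable credit model on synthetic data
# (hypothetical features, not any real scoring system).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "years_employed"]
X = rng.normal(size=(500, 3))
# Invented ground truth: income and tenure help, debt hurts.
y = (X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# Unlike a black box, each coefficient states how a feature pushes
# the decision -- the kind of justification a rejected applicant
# could actually be given.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>15}: {coef:+.2f}")
```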

Regardless of whether an AI makes good decisions about you and whether they can be interpreted: perhaps, as a human, you simply do not want to be judged by an AI. A topic often discussed in this context is so-called social profiling. To use the lending example again: if an AI automatically divides applicants into cohorts, you may be denied a loan based on your age or origin while comparable candidates are approved. Such ethical challenges are being addressed by various institutions, research organizations, and NGOs. Amnesty International, for example, provides examples of how AI systems can pose a threat to our fundamental rights.

Guidelines for creating trustworthy AI systems

The questions raised above are being addressed by different organizations and institutions, and from different directions. Various guidelines have already been developed so that AI systems can be built to be robust and transparent – systems that do not pose a threat to us, but rather support us wherever and whenever we as humans reach our limits. The goal is that we as a society can trust AI and its decisions. A well-known example is the Ethics Guidelines for Trustworthy AI published by the EU in April 2019. According to these, AI systems should be developed and deployed in compliance with several ethical principles. In addition to fairness and the prevention of harm (AI systems should not cause harm or have a negative impact in any other way), the guidelines describe the principles of respect for human autonomy and explicability in the context of AI systems. Explicability here includes not only the comprehensibility of the generated results (keyword: black-box models), but also transparency with respect to all processes, the capabilities, and the purpose of the AI system. To preserve human autonomy, human oversight and control over the AI system and all of its processes must be ensured. Accordingly, AI systems should not subordinate, coerce, or condition humans; instead, they should support and complement human capabilities.

The guidelines set out seven requirements that AI systems must meet to preserve these ethical principles and establish trust, and they suggest various technical and non-technical methods for implementing them. The requirements for trustworthy AI are listed below.

  • Human agency and human oversight: AI systems should support human autonomy and decision-making and be supervisable by humans. The less oversight a human can exercise over an AI system, the more extensively it must be tested beforehand and the stricter its governance and control must be.
  • Technical robustness and safety: for a system to be considered robust, it must work properly and its results must be accurate and reproducible (see the reproducibility sketch after this list). Safety includes risk prevention, the creation of fallback plans, and ensuring protection against attacks and misuse.
  • Privacy and data governance: user data must be protected and must not be used to discriminate against users. Data governance is necessary to ensure the quality and integrity of the data, its documentation, and control over who may access it.
  • Transparency: only when a system and its results are traceable – that is, verifiable and explainable – and the capabilities and limitations of the system are clearly communicated and defined can the system be called fully transparent.
  • Diversity, non-discrimination and fairness: all stakeholders must be involved and treated equally throughout the lifecycle of the AI system. It is important that bias is avoided and that the system is designed to be as accessible and user-oriented as possible.
  • Societal and environmental well-being: AI should be used for the benefit of all people, and throughout its lifecycle, society, other sentient beings, and the environment should be considered stakeholders. This includes sustainability, environmental friendliness, social awareness, and the preservation of democracy.
  • Accountability: provisions must be made to ensure responsibility and accountability for AI systems and their results. Vulnerable persons must be given special consideration, and adequate legal protection must be provided in case adverse effects occur.
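Reproducibility, as mentioned under technical robustness above, starts with controlling randomness. A minimal sketch – assuming only Python's standard library and NumPy – looks like this:

```python
# Minimal reproducibility sketch: pin every source of randomness so
# that a result can be re-run and verified by others.
import random

import numpy as np

SEED = 42
random.seed(SEED)     # Python's built-in RNG
np.random.seed(SEED)  # NumPy's global RNG (used by many libraries)

# With fixed seeds, this "experiment" yields identical numbers on
# every run -- a precondition for auditable results.
print(np.random.normal(size=3))
```

Real projects additionally have to pin library versions, data snapshots, and hardware-dependent behavior to be fully reproducible.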

Conclusion

For the development of good and safe AI systems, it will be even more important in the future that negative effects are avoided and that all decisions remain comprehensible to us humans. Otherwise, people will place little trust in AI – and where trust is lacking, acceptance of such intelligent systems in our society will be lacking too. It is particularly important that both developers and users know and understand not only the opportunities of AI systems, but also their risks and limitations. Through this, through adherence to guidelines, through mindfulness in development, and through human-centered development and design principles, we can create AI systems that are fit for the future.

RISC Software GmbH is happy to support you in the submission and implementation of (research) projects in the field of Trusted AI.

    Author

    Sandra Wartner, MSc

    Data Scientist
