Explainability and Transparency
How can we make the automated decisions of artificial intelligence comprehensible to users?
More and more decisions in our working and living environments are made automatically by machines with artificial intelligence (AI). To prevent this increasing automation from eroding democratic control over AI technologies and their social acceptance, policymakers, researchers and industry must jointly develop strategies that ensure the explainability and transparency of algorithmic decisions.
Lasting safety
How can we design artificial intelligence systems to be safe and rule out critical malfunctions?
Whether in autonomous driving, medical imaging or networked industry: as they spread, artificial intelligence (AI) technologies are increasingly entering safety-critical areas. These systems learn to adapt to different situations. A decisive challenge is to ensure that they do not learn the "wrong" things, so that critical malfunctions of these technologies can be ruled out.
Surveillance and self-determination
How do we prevent governments and companies from using artificial intelligence to surveil us?
With the introduction of artificial intelligence (AI) technologies into our mobile phones, smartwatches and public video surveillance, the risk of comprehensive surveillance by state and private actors grows. Facial and behavior recognition technologies in particular can be misused for this purpose. A crucial current and future challenge is therefore to protect citizens' fundamental rights so that they can move freely and with self-determination through a world permeated by AI.
Limits of action
Which decisions may and may not be made automatically by systems with artificial intelligence in the future? And under what conditions?
Artificial intelligence (AI) systems will be capable of a high degree of autonomy in the near future. As these systems grow more complex, their decision criteria will become ever more difficult, or at least time-consuming, to reconstruct. This is one more reason why rules are needed that define which responsibilities may be assigned to such systems under which conditions, and which decisions should in principle remain reserved for humans.
Integration of different forms of learning
How can different forms of machine learning be efficiently combined?
At present, machine learning is primarily understood as pattern recognition in large amounts of data, the results of which are often difficult or impossible for humans to comprehend. There are, however, interpretable machine learning methods such as classical decision trees and inductive programming. Inductive logic programming in particular offers a natural combination of learning with knowledge-based methods of artificial intelligence. Combining the different methods can help to develop machine learning approaches that are more data-efficient as well as robust and transparent.
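To make the idea concrete, the following is a minimal Python sketch (scikit-learn is an assumed library choice; the data and all names are illustrative, not from the original text). It does not implement inductive logic programming itself, but a simpler pairing of methods: an opaque random forest is distilled into a small decision tree whose rules a human can read.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy data standing in for a real decision problem.
    X, y = make_classification(n_samples=1000, n_features=8,
                               n_informative=4, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Accurate but opaque learner: a random forest.
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X_train, y_train)

    # Distil the forest into a shallow decision tree: the tree is trained
    # to mimic the forest's predictions, yielding an interpretable surrogate.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X_train, forest.predict(X_train))

    print("forest accuracy:   ", forest.score(X_test, y_test))
    print("surrogate accuracy:", surrogate.score(X_test, y_test))
    print(export_text(surrogate))  # human-readable decision rules

The surrogate is usually somewhat less accurate than the forest; how much accuracy to trade for rules that can actually be inspected is precisely the kind of design decision that combining methods is meant to support.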
Discrimination through algorithms
How can we prevent machine-made discrimination?
Whether in the assessment of creditworthiness or the selection of applicants, the increasingly efficient collection and processing of data by artificial intelligence (AI) systems offers the opportunity to make decisions automatically, free of human prejudice and discrimination. Yet under the cover of seemingly data-driven objectivity, AI algorithms can also reinforce existing discrimination or even create new forms of it. It is the task of business, science and politics to prevent this.
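One way to make such discrimination visible is to audit decision rates across groups. The Python sketch below computes the demographic parity difference, one common and deliberately simple fairness metric; the credit-approval rates and group labels are entirely hypothetical and serve only to illustrate the check.

    import numpy as np

    # Hypothetical decisions (True = credit approved) and a protected
    # attribute (group 0 or 1); all values here are made up for illustration.
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)
    approved = rng.random(1000) < np.where(group == 0, 0.60, 0.45)

    # Demographic parity difference: the gap in approval rates between groups.
    rate_0 = approved[group == 0].mean()
    rate_1 = approved[group == 1].mean()
    print(f"approval rate, group 0: {rate_0:.2f}")
    print(f"approval rate, group 1: {rate_1:.2f}")
    print(f"demographic parity difference: {abs(rate_0 - rate_1):.2f}")

A gap near zero does not by itself prove fairness: demographic parity is only one of several competing criteria, and deciding which criterion should apply in credit scoring or hiring is ultimately the normative task the text assigns to business, science and politics.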
Resource efficiency
What new approaches can we develop to make artificial intelligence technologies less resource-intensive?
Given dwindling natural resources and the growing challenge of climate change, we need to develop artificial intelligence (AI) technologies that are more resource- and energy-efficient than today's solutions. This includes using less data-intensive machine learning methods and developing intelligent ways to avoid storing irrelevant data. The ability to balance technological progress with the protection of our planet is crucial for the future of humanity.
Compliance with standards
How can we define uniform quality criteria for artificial intelligence and ensure compliance with them?
The safety and security of artificial intelligence (AI) systems require the development of quality standards. Ensuring compliance with these standards in practice, at both the technical and the institutional level, is essential for achieving broad acceptance of AI among the population. To this end, new methods of technical quality control must be developed.
Responsibility and liability
Who bears the moral and legal responsibility for the automated decisions of artificial intelligence?
Automated decisions, especially those made using large amounts of data and complex neural networks, are in many cases impossible for humans to reconstruct. To give developers and users of artificial intelligence (AI) legal certainty, it must be clarified who will be responsible for the automated decisions of AI systems, and under which circumstances.
Education
How can we anchor basic knowledge about artificial intelligence more firmly in general and vocational education?
Systems with artificial intelligence (AI) are becoming increasingly relevant in our living and working environments and are driving the digital transformation of our society. To strengthen everyone's digital literacy and enable people to assess and evaluate current developments in AI, a basic understanding of AI must be better integrated into general and vocational education.