Prof. Dr. Gunnar Stevens

Many people are unaware of the security risks of the internet. Prof. Dr. Gunnar Stevens and his team at the University of Siegen are working on several research projects examining how young and elderly people in particular can protect themselves from the dangers of data misuse in their daily lives.
Prof. Dr. Gunnar Stevens, Head of IT Security and Consumer Informatics, and many other colleagues at the University of Siegen are looking at how to give people an understanding of the dangers of the internet and how to make them media-savvy. One area of research is how smart speakers and other smart home devices can be made more secure. The problem is that, as a rule, people use these devices without thinking much about privacy. »But actually, a smart speaker harvests a whole lot of information about users, their preferences, and their interests,« says Gunnar Stevens. What’s more, when they’re switched on, the devices transmit snatches of conversation and ambient noises to the provider’s server. In the checkMyVA research project, Stevens analyzed these recordings – e.g. snippets of conversations picked up during a party – to find out how far they reveal information on specific topics or intimate and confidential content. »We don’t know in any detail what happens to this information,« says Gunnar Stevens. »It’s like a closed book.« In a study conducted within checkMyVA, participants reported that a few hours after conversations about certain subjects, they had received targeted ads for corresponding products.
In the new SAM-Smart joint project, Gunnar Stevens wants to find ways to ensure the risks are explained to users in a concise and easy-to-understand form. One idea, for example, is to display the information in graphic form – on a screen, the user’s smartphone, or their TV – combined with concrete tips for privacy settings. Other research within the project will examine voice assistant systems. The goal is to enable users in future to ask their smart devices directly about privacy settings.

»For a long time, IT security was seen as a purely technical problem,« says Gunnar Stevens. The human element, he points out, was simply removed from the equation or treated as a risk: developers assumed the »dumbest conceivable user«. »But we see people as part of the security architecture. We want to give them the ability to contribute to security« – for example by developing self-explanatory technology, or by educating users.
This is also the aim of the EU Marie Skłodowska-Curie Innovative Training Network GECKO (building GrEener and more sustainable soCieties by filling the Knowledge gap in social science and engineering to enable responsible artificial intelligence co-creatiOn), initiated by Stevens, which seeks to improve decision-making by combining generative and cognitive systems and to establish artificial intelligence as a helpful tool for humans. The project is not only about technical innovations, but also about developing explainable and trustworthy AI solutions that can be used successfully both in practice and in complex decision-making contexts. The disciplines of (socio-)informatics, cognitive science, psychology and economics work closely together, because only an interdisciplinary approach can yield a deep understanding of human decision-making processes and produce AI systems that offer real practical benefits for users.
GECKO investigates how AI-based systems can use generative models to produce suggestions or scenarios that support a user’s decision-making process. This could take the form of recommendations or alternative courses of action that expand the decision space and help users make informed decisions.
Another focus of the project is to understand how cognitive processes – such as perception, attention and memory – feed into decision-making, and how these can be simulated by AI models. The aim is to gain a better understanding of how humans make decisions and how AI systems can support or improve those decisions. An important aspect of GECKO is that the AI models should be designed to be explainable and transparent: the systems not only make recommendations, but also make clear how they arrived at their suggestions. In this way, users should be able to understand the decision-making aids and evaluate them independently.
GECKO is aimed at practical areas of application in which decision-making plays a central role – in business, medicine, psychology or even everyday life, for example in supporting doctors in reaching a diagnosis, in management decisions, or in the selection of services.