How can IT systems be made more secure?
"The security risks for IT systems are manifold," says Jörn Müller-Quade. The professor of IT security heads the research group "Cryptography and Security" at the Institute for Information Security and Reliability (KASTEL) at the Karlsruhe Institute of Technology (KIT) and is a director at the Research Centre for Information Technology (FZI). "The most well known are certainly malware and classic hacker attacks. But many attacks go unnoticed, such as when systems are spied on and information about their vulnerabilities is collected." Such attacks no longer come only from computer enthusiasts who want to test their skills. Other groups use them for their own purposes: criminals, for example, encrypt computers with ransomware and only release them again once a ransom is paid. The military apparatuses of states try to weaken their opponents' infrastructure by digital means. And secret services use special programs, for example, to learn more about the economies of friend and foe.
Yet the attackers seem to have a fundamental advantage. "There is indeed an asymmetry in IT security," explains Jörn Müller-Quade. "The defenders have to close all security gaps, while the attackers only have to find one that is open." The only exception to this rule, adds the encryption expert, applies to cryptography: "Since the revelations of Edward Snowden, we have known that even the NSA struggled with modern encryption methods." However, and this too the revelations show, the secret service got hold of the desired data by other means. For Jörn Müller-Quade, one thing is therefore clear: "The biggest challenge we face is overall system security." It is of little use if the heavy steel door is firmly locked with five bolts while the window stands half open.
This challenge grows all the more as system boundaries fall with increasing speed, because today it is no longer only computers and telephones that are connected via the network. Power stations and industrial plants, refrigerators and televisions, smart home systems and electricity meters also exchange information with each other. "We should not network everything that can theoretically be networked," says the IT expert. "Many people underestimate how well cyber attacks scale." What he means by this is easily explained with a trip into the classic gangster milieu: in the offline world, the number of burglaries scales with the number of burglars, because even the fastest thief can only break into a certain number of buildings per night. "With cyber attacks, this is no longer the case," says Jörn Müller-Quade. "Here, there are hardly any resource limits for a capable attacker."
Attackers do not always have to provide the resources for an attack themselves: often they have hijacked the computer systems of unsuspecting users all over the world, which then work for them. Attacks of this kind include overload attacks, known in technical jargon as distributed denial-of-service (DDoS) attacks.
You don't just catch flies with honey
"In such attacks, the attacker floods the victim's system with enormous data traffic," explains Christian Rossow. The professor of computer security heads the System Security Research Group at the Helmholtz Centre for Information Security (CISPA). "This usually exceeds the victim's processing capacity and paralyses its website, for example, because regular requests can hardly be answered any more." It is akin to sending someone thousands of meaningless letters a day: the recipient would simply be overwhelmed. Often, the attackers use hundreds of computers around the world that they have previously infected with malware. Without the owners noticing anything, their computers are thus turned into weapons.
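The flooding Rossow describes can be made concrete with a small sketch. The following snippet is purely illustrative (the window size and threshold are invented, not tuned operational values): it counts requests per source address in a sliding time window and flags any source whose rate far exceeds what a regular client would produce.

```python
import time
from collections import defaultdict, deque

WINDOW = 10.0   # seconds of history to keep (illustrative value)
LIMIT = 100     # requests per source per window before flagging (illustrative)

class FloodDetector:
    """Sliding-window request counter per source address."""

    def __init__(self):
        self.hits = defaultdict(deque)   # source -> recent request timestamps

    def request(self, source, now=None):
        """Record one request; return True if the source looks like a flood."""
        now = time.monotonic() if now is None else now
        q = self.hits[source]
        q.append(now)
        while q and now - q[0] > WINDOW:  # drop timestamps outside the window
            q.popleft()
        return len(q) > LIMIT

# Simulate one source firing 200 requests within two seconds.
det = FloodDetector()
flagged = [det.request("10.0.0.5", now=i * 0.01) for i in range(200)]
print(flagged[50], flagged[150])   # quiet at first, flagged once the limit is hit
```

Real DDoS defenses aggregate across many sources and layers, but the core idea of rate-based anomaly detection is the same.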
"Such attacks are carried out on websites and online shops, for example," explains the expert. This causes not only reputational damage but sometimes also serious loss of revenue, and it may be instigated by a competitor as well as by activists or secret services. "But some of these attacks are also used to blackmail people, companies or organizations," he adds. "Then the attacker only stops after receiving a ransom." Quite often, online gamers are also affected, for example when an envious competitor wants them out of the game for a while. "With these attacks, however, it is also possible to directly attack critical infrastructures," says Christian Rossow. "For example, if several power plants are connected for remote control, a mass attack could cause severe disruptions."
The IT expert and his team have therefore made it their task to find such mass attacks on the internet. "We make ourselves into a honeypot," he says with a grin. "That means we pose as an abusable middle system." If the attacker bites, he uses Rossow's globally distributed network of rented servers for his attacks and the IT expert is right in the middle of it. "If the attack is launched using our systems, we are of course faced with a dilemma. On the one hand, we don't want to attract attention, but on the other hand, we don't want to actively participate in the attack. That's why we only send a few data packets to avert damage." But his team has a front row seat and can document the attacks live. And they find tens of thousands of them every day.
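The dilemma Rossow describes, documenting the attack in full without actively joining it, can be caricatured in a few lines. This is a hypothetical sketch with invented names and limits, not CISPA's implementation; the cap of three replies per source is a stand-in for "only a few data packets".

```python
from collections import Counter

MAX_REPLIES_PER_SOURCE = 3      # stand-in for "only a few data packets"
sent = Counter()                # replies already sent, per source
log = []                        # complete record of every observed request

def handle_request(source, payload):
    """Log everything, but answer at most a few times and never amplify."""
    log.append((source, payload))               # document the attack in full
    if sent[source] >= MAX_REPLIES_PER_SOURCE:
        return None                             # fall silent from now on
    sent[source] += 1
    return b"ok"                                # minimal, non-amplifying reply

# An attacker probes the supposed "middle system" ten times.
replies = [handle_request("198.51.100.9", b"\x00" * 40) for _ in range(10)]
print(sum(r is not None for r in replies), len(log))
```

The honeypot stays plausible to the attacker for a few exchanges, yet the victim side sees almost no traffic from it, while the full request log feeds the researchers' analysis.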
"This way we can quickly let the victims know so they can take countermeasures," he explains, "and we can help identify the attackers." To do this, Christian Rossow and his team have developed a special fingerprint method. They give each attacker a personal fingerprint and can thus find out where their network is located. "We work together with the state criminal investigation offices and with Europol to track down such attackers," says the IT specialist. "In the past, the attacks were completely anonymous, which made prosecution almost impossible. Meanwhile, the FBI is also very interested in our services."
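The article does not disclose how CISPA's fingerprint method actually works, but the general idea of linking attacks by recurring traffic traits can be sketched as follows. Everything here (the chosen traits, the bucket size, the digest length) is an invented illustration.

```python
import hashlib

def fingerprint(packets):
    """Digest of recurring low-level traits of an attack stream.

    Packet sizes are bucketed so small jitter between attacks from the
    same tool does not change the fingerprint.
    """
    traits = sorted({(p["size"] // 64, p["ttl"], p["proto"]) for p in packets})
    return hashlib.sha256(repr(traits).encode()).hexdigest()[:16]

# Two attacks, observed at different victims, from the same hypothetical tool:
attack_a = [{"size": 130, "ttl": 57, "proto": "udp"}] * 100
attack_b = [{"size": 140, "ttl": 57, "proto": "udp"}] * 80   # slightly jittered sizes
print(fingerprint(attack_a) == fingerprint(attack_b))        # linked to one source
```

Matching fingerprints across victims is what lets investigators attribute otherwise anonymous mass attacks to a single botnet or booter service.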
For the researchers at CISPA, this is an active and exciting field of research. "Our fingerprint often helps, but not always," says Christian Rossow. "That's why we are constantly looking for new ways to identify the perpetrators of such mass attacks."
Awareness is the key word
Whether mass attacks or targeted hacks, ignorance, carelessness or gullibility on the part of IT users is often a decisive factor in the success of an attack. Social engineering is what IT experts call such attempts to obtain passwords and access to systems. "Leaving a manipulated USB stick in the company car park is a classic that probably still works today," says Jörn Müller-Quade. And of course it includes all the phishing emails that try to trap users into giving up their passwords. "Social engineering is a huge problem because we humans don't always assess situations correctly. That's exactly why Professor Melanie Volkamer and her research group Security Usability Society (SECUSO) at KIT are also researching awareness." In the past, the person in front of the computer was often considered a weak point. But that, the IT expert assures, is an outdated view. "Rather, you have to sensitize people," he says. "You have to enable them to recognize irregularities and use their common sense." Forcing people to behave like machines is the wrong approach, he says; hardly anyone can remember cryptic passwords. Well-designed password managers and two-factor authentication are not only easy to use but also increase security. "User blaming, i.e. simply blaming everything on the user, is no longer done today," adds Jörn Müller-Quade. "Because people have come to the conclusion that poor usability is a design flaw and not the user's fault."
But what if the system itself makes decisions that we can't really comprehend? With the advent of artificial intelligence (AI) algorithms, a whole new set of security questions suddenly arise. Hans-Martin Rieser at the Institute for AI Security of the German Aerospace Center (DLR), which was founded in 2019, is investigating what these questions are and how they can be answered.
One squiggle does not make a cat
"We do understand how a neural network is structured and how deep learning approaches work in principle," says the AI security expert, summing up the problem. "But the actual learned function can no longer be understood. So we cannot tell how an algorithm learns from the training data, or whether the answers it gives are actually plausible." A computer decision whose origin is unclear and which cannot be trusted without reservation, however, harbors a security risk. This is especially true if the AI is to be used on critical infrastructure, such as the control of energy networks. In the SKIAS project, Hans-Martin Rieser therefore wants to find out how trustworthy AI decisions are.
"We have completed the preparations and are now starting on the content," says the AI expert. "We have identified three important starting points. The first: an AI is only as good as the data it is trained with." An example: in one experiment, an AI was supposed to learn to distinguish dogs from cats in pictures. However, the human teachers manipulated the visual material with which they trained their AI: they painted a squiggle on a large part of the cat pictures. After the training was complete, they unleashed the AI on new images. The AI had apparently picked out only that one feature, because it classified every image with a squiggle as a cat.
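The squiggle effect can be reproduced in miniature with synthetic data. The sketch below is hypothetical (plain logistic regression on invented feature vectors, not the original experiment): a planted "marker" feature accompanies the cat label during training, the model scores almost perfectly while that shortcut is present, and its accuracy collapses as soon as the squiggle moves to the dog pictures.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, squiggle_on_cats):
    """Toy stand-in for the pictures: label 1 = cat, 0 = dog.
    Five weakly informative 'image' features plus one planted squiggle bit."""
    y = rng.integers(0, 2, n)
    real = rng.normal(size=(n, 5)) + 0.3 * y[:, None]   # weak genuine signal
    marker = (y if squiggle_on_cats else 1 - y).astype(float)
    bias = np.ones(n)                                   # intercept column
    return np.column_stack([bias, marker, real]), y

def train_logreg(X, y, steps=2000, lr=0.5):
    """Minimal logistic regression via batch gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean((X @ w > 0) == y))

# Train with the squiggle painted on the cat pictures ...
X_tr, y_tr = make_data(2000, squiggle_on_cats=True)
w = train_logreg(X_tr, y_tr)

# ... then test once with the shortcut intact, once with it moved to the dogs.
X_same, y_same = make_data(1000, squiggle_on_cats=True)
X_moved, y_moved = make_data(1000, squiggle_on_cats=False)
acc_same = accuracy(w, X_same, y_same)
acc_moved = accuracy(w, X_moved, y_moved)
print(acc_same, acc_moved)   # near-perfect with the shortcut, far worse without
```

The model never needed the genuine image features, because the marker alone explained the training labels, which is exactly the misjudgment the SKIAS researchers describe.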
This is a misjudgment that can have serious consequences for safety in other settings. Such errors must not happen if an AI is to generate a model of an unknown environment from image data, or perhaps even steer a robot autonomously across Mars. "We want to use a drone to find out how an AI learns such things," says Hans-Martin Rieser. The SKIAS team has it fly in a controlled environment and scan its surroundings with a camera. "We also created a reference model of the environment. We then compare the model created by the drone's AI with this reference and hopefully find indications of how reliably the system learns."
His second starting point for the SKIAS project is the fact that an AI will output a solution regardless of the quality of the data. "This is the basic function of a neural network," he explains. "Every input goes through the different layers and generates an output." But are the results an AI system produces actually always useful? "To improve this, we introduce physical rules into the AI system," Hans-Martin Rieser says, outlining his approach. In doing so, he essentially gives the AI the same framework in which we humans operate, because even we cannot bend the laws of physics. "This lets us check, within the laws of physics, whether the AI's results are actually plausible, and in this way increase safety."
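One simple way to impose such physical rules is a post-processing step that forces raw network outputs into the physically feasible range. The sketch below is a hypothetical illustration (a toy power-grid balance, not DLR's implementation): each plant's output is clamped to its capacity, and total generation is rescaled toward total demand.

```python
import numpy as np

def physics_layer(raw_output, capacity, demand):
    """Post-process raw NN outputs so two hard constraints hold:
    1. each plant generates between zero and its capacity,
    2. total generation matches total demand (energy balance).
    """
    g = np.clip(raw_output, 0.0, capacity)   # enforce per-plant limits
    if g.sum() > 0:
        g = g * (demand / g.sum())           # single rescale toward balance;
                                             # a full projection would iterate
    return g

raw = np.array([120.0, -5.0, 80.0])          # physically implausible raw output
cap = np.array([100.0, 50.0, 100.0])         # plant capacities (invented values)
balanced = physics_layer(raw, cap, demand=150.0)
print(balanced, balanced.sum())              # within limits, sums to demand
```

However implausible the raw numbers, what leaves the physics layer can at least never violate these two laws, which narrows the space of wrong answers the AI can produce.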
And there is a third area of activity that the AI expert wants to tackle, among others, in the context of cybersecurity. "The data that an AI receives as input," he says, "can be error-prone." To understand this, one must keep in mind how a neural network is fed data. If the AI is to identify people in surveillance video, for example, it receives an image in a common graphic format. "However, the conversion of the data from the camera sensor into the finished image can be error-prone," explains Hans-Martin Rieser. "Such errors can stem from the image-processing algorithms themselves, but they can also be introduced deliberately by attackers." To confuse an AI, he explains, artificially added image noise would often suffice. "That's why we want to get as close as possible to the raw data with the AI," he continues. To do this, he feeds the AI the raw signals directly from the camera, instead of first converting them into an image and then sending that image over possibly vulnerable channels.
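Why a little added noise can confuse a model is easiest to see in the simplest possible case. This hypothetical sketch uses an invented linear classifier; the perturbation direction follows the idea behind the well-known fast gradient sign method, where a modest change aligned against the current decision flips the classification.

```python
import numpy as np

w = np.array([0.8, -0.5, 0.3, 0.1])    # 'trained' weights (invented for illustration)
x = np.array([0.2, 0.9, -0.1, 0.4])    # a benign input

score = w @ x                          # the decision is the sign of this score
noise = -0.3 * np.sign(w) * np.sign(score)   # small push against the decision
score_adv = w @ (x + noise)

print(score, score_adv)                # opposite signs: the decision flipped
```

Deep networks are nonlinear, but locally they behave much like this linear model, which is why carefully crafted image noise that humans barely notice can change their output.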
"With SKIAS, we don't want to find out in detail how an AI arrives at a certain result," he summarizes. "After all, we are rarely interested in that with a human being. We simply want to know how far we can trust the result, whether from a human or an AI."
Jörn Müller-Quade also knows how important such research is already today: "As a member of the Helmholtz Association, we are concerned not only with today, but also with the well-being of society on a time horizon of ten to fifteen years," he says. "When you consider that critical infrastructures such as the energy grids are becoming ever more intelligent, we already have to think about how to secure them in every possible respect."
"It's difficult to build security in later." - Interview with Anne Koziolek, Professor of Software Engineering at the Karlsruhe Institute of Technology (KIT)