Radboud and TUE help improve AI decision-making

Nieke Roos
Reading time: 2 minutes

Researchers from Radboud University Nijmegen and Eindhoven University of Technology, together with colleagues from the University of Texas at Austin and the University of California, Berkeley, have come up with a method that might help artificial intelligence find safer options faster. They’ve devised a new way to reason about uncertainty, which can be applied, for example, to the many unknown circumstances a self-driving car will encounter during a ride. To validate the AI, extensive calculations are run to analyze how it would approach various situations. With the new method, to be published at AAAI, this modeling can become far more realistic and thus allow for better, safer and quicker decision-making.

The researchers have defined a new approach to so-called “uncertain partially observable Markov decision processes,” or uPOMDPs. These are models of the real world that estimate the probability of events. POMDPs are already used to simulate and model many situations. They can help to predict the spread of an epidemic, calculate how planes and spacecraft can avoid collisions and even survey and protect endangered species. “We know that these models are very good at providing a realistic capture of the real world. However, the enormous processing power needed to use them means their use in practical applications is often still limited,” says Radboud scientist Nils Jansen. “This new approach allows us to take all our calculations and theoretical information and use it in the real world on a more consistent, regular basis.”
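To give a feel for the kind of model involved, here is a minimal, self-contained sketch of a POMDP belief update. It is an illustration only, not the researchers' model: the state names, transition table `T` and observation table `O` are hypothetical numbers chosen for the example. The agent cannot observe the hidden state directly, so it maintains a probability distribution (a "belief") over states and updates it from noisy observations via Bayes' rule.

```python
# Toy POMDP with two hidden states. All probabilities below are
# illustrative, not taken from the research described in the article.

T = {  # T[s][s2]: probability of moving from hidden state s to s2
    "safe":  {"safe": 0.9, "risky": 0.1},
    "risky": {"safe": 0.3, "risky": 0.7},
}
O = {  # O[s][z]: probability of observing z when the true state is s
    "safe":  {"clear": 0.8, "alarm": 0.2},
    "risky": {"clear": 0.4, "alarm": 0.6},
}

def belief_update(belief, observation):
    """One step of Bayesian filtering: predict the next state,
    then correct the prediction using the observation."""
    predicted = {s2: sum(belief[s] * T[s][s2] for s in belief) for s2 in T}
    unnormalized = {s: predicted[s] * O[s][observation] for s in predicted}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

belief = {"safe": 0.5, "risky": 0.5}
belief = belief_update(belief, "alarm")
print(belief)  # after an "alarm" observation, more weight shifts to "risky"
```

In a full POMDP, a policy would then choose actions based on this belief; the point here is only that the model tracks probabilities of events it cannot observe directly, which is what makes the computations expensive.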

The team arrived at their breakthrough by explicitly including the uncertainty of the real world in the models themselves. Jansen: “For example, current models might just tell you that there’s an 80 percent chance that a ride in a self-driving car will be fully safe. It’s unclear what might happen in the other 20 percent and what type of risk can be expected. That’s a vague approximation of risk. With our new approach, a system could give far more detailed explanations of what could happen and take those into account when making calculations. For users, this means they get more specific examples of what could go wrong and can make better, more targeted adjustments to avoid those specific risks.”
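The contrast between a single point estimate and an uncertainty-aware model can be sketched as follows. This is an assumed illustration of the general idea behind interval-valued probabilities, not the authors' algorithm: each probability is replaced by an interval, and a robust check evaluates the worst case within those intervals, so the "other 20 percent" becomes an explicit, bounded quantity. The step intervals are hypothetical numbers.

```python
# Sketch of interval-valued (uncertain) probabilities. Instead of one point
# estimate per step, each step carries a (low, high) interval; a robust
# evaluation multiplies the lower bounds to get a guaranteed floor.

def worst_case_success(intervals):
    """Worst-case probability that every step succeeds: nature
    adversarially picks the lower bound of each interval."""
    prob = 1.0
    for lo, _hi in intervals:
        prob *= lo
    return prob

def best_case_success(intervals):
    """Best-case counterpart, using the upper bounds."""
    prob = 1.0
    for _lo, hi in intervals:
        prob *= hi
    return prob

# Hypothetical intervals for three maneuvers of a self-driving car.
steps = [(0.95, 0.99), (0.90, 0.98), (0.93, 0.97)]
print(worst_case_success(steps))  # guaranteed lower bound on full safety
print(best_case_success(steps))   # optimistic upper bound
```

A point-estimate model would report a single number somewhere between these bounds; the interval view makes explicit how bad things can get and where the risk concentrates, which is the kind of detail the quote describes.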

