The AI tool that predicts violence

The tool consists of several algorithms, which together can assess the risk that an individual will commit acts of violence. All you need are texts from the internet. Photo: Getty Images

“We are motivated by the possibility of solving a problem that is really serious,” says Nazar Akrami, a professor of psychology at Uppsala University. Working with information technology researchers, he has developed a tool that analyses texts on the internet to assess the risk that a person might commit a violent act.

The Dechefr tool has generated a great deal of interest. Akrami and his colleagues have been working with the Federal Bureau of Investigation (FBI) in the United States. Others who have shown interest in the tool include Google, LinkedIn and law enforcement authorities in various countries.

It all started a few years ago when Akrami received an email from Lisa Kaati, an IT researcher at Uppsala University, who wanted to meet and talk. Akrami himself had been researching and teaching personality and social psychology since 2005.

“Lisa had a concrete problem. She worked with law enforcement and national defence forces, researching how to spot lone perpetrators of violence on the internet. Our joint research focused on how to identify individuals at an elevated risk of committing violent acts based on their written communication.”

Nazar Akrami, a professor of psychology at Uppsala University. Photo: Mikael Wallerstedt

Formed a company

When the research project ended, Nazar Akrami did not want the findings to end up in a drawer, so he decided to form a company to continue developing and refining the technologies. Today he runs the company Mind Intelligence Lab.

At its core is Dechefr – a tool with several algorithms that together can assess the risk that an individual may commit acts of violence. All you need is a piece of text, for example from social media, a discussion forum, a text message or a chat.

“All kinds of texts can be entered, and the tool then produces a risk rating from 0 to 100, with a green, yellow or red code. The great majority turn out to be green. One problem with risk assessments is that some cases are difficult to assess, so everything hinges on reducing the false positives.”
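
As a rough illustration only, and not the actual Dechefr logic, a rating produced in this way could be mapped to the green, yellow and red codes with simple thresholds. The cut-off values in the Python sketch below are assumptions made for the example:

    # Illustrative sketch only: maps a 0-100 risk rating to a colour code.
    # The threshold values are assumptions, not Dechefr's actual cut-offs.
    def colour_code(risk_rating: float) -> str:
        if not 0 <= risk_rating <= 100:
            raise ValueError("risk rating must be between 0 and 100")
        if risk_rating < 40:      # hypothetical low-risk cut-off
            return "green"
        if risk_rating < 70:      # hypothetical elevated-risk cut-off
            return "yellow"
        return "red"

    print(colour_code(12))  # "green" - as the article notes, most cases are
    print(colour_code(85))  # "red"   - flagged for an analyst to review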

Partnering with the FBI

Working with the FBI, he has been further refining the tool and has increased its accuracy by training the algorithms on data from known cases and from a normal population. This in turn has led to contacts with companies like Google and LinkedIn and with law enforcement authorities in various countries.
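
The article does not describe the underlying model, but a generic, hypothetical sketch of that training step, fitting a text classifier on writing samples labelled as coming from known cases or from a normal population, might look like this (scikit-learn, with placeholder data):

    # Hypothetical illustration of training on known cases (label 1) versus a
    # normal population (label 0). This is NOT the Dechefr model; the pipeline
    # and the two placeholder samples are for demonstration only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["writing sample from a known case ...",
             "writing sample from the normal population ..."]
    labels = [1, 0]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    # Probability of the "known case" class, scaled to a 0-100 rating.
    risk_rating = model.predict_proba(["a new text to assess"])[0, 1] * 100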

“In our two years of operation, we have found that our algorithms and infrastructure can be used for other purposes. For example, we work on assessing suicide risk by analysing texts.”

The company’s products are at various stages of maturity. The Suicidescan tool analyses suicide risk, and a module that analyses personality is also under development.

Based on written texts

All the tools use texts found on the internet, where people commonly express negative thoughts and feelings.

“There are two worlds, the real world and the virtual world. In the real world, there are lots of mechanisms that make meetings pleasant, but these are missing in many places online,” says Nazar Akrami. “Our company develops technologies that enable us to predict various behaviours – especially undesirable ones. For example, we are currently working on ways to identify grooming behaviour by sexual predators, which is important in helping children stay safe online.”

Up until now, the risk assessment tools have been used by analysts, who make their own assessment of the warning flags identified by the tool. Although research shows that the accuracy is high, around 95 per cent, the analyst always has the last word and makes the decisions.

Help in making the right decision

The aim of the tools is to help analysts make the right decisions. Sometimes, for example, situations can arise where people who pose no risk at all are red-flagged.

“Making risk assessments is complicated, and flagging innocent individuals or missing potentially dangerous ones is always a problem. Analysts have a complicated task, and I hope our tools will help them in their assessments.”

Of course, this raises difficult questions about privacy and ethics, which have been discussed within the company a great deal.

“Our research has undergone ethical review, and we do not allow just anyone to use the tools. This is carefully regulated in agreements with analysts, companies and social media, for example.”

Annica Hulth

Lone perpetrators of violence

  • “Lone perpetrator” is the term used to describe a perpetrator who, alone or with another person, plans and commits serious violent acts without outside help and without personal gain.
  • In Sweden and its Nordic neighbours, there have been several high-profile attacks by lone perpetrators, such as terrorist attacks and school shootings. The most well-known case is Anders Behring Breivik’s attack in Norway in 2011.
