
Is it possible to detect lies with the help of artificial intelligence?

On November 28, 2023, a presentation article entitled “Revealing the truth using artificial intelligence: a closer look at LiarLiar.ai” was published on the website LiarLiar.ai. The authors of this publication claim that the new artificial intelligence’s advanced algorithms can surpass the traditional polygraph and recognize lies from video. Our team became interested in the problem of applying AI (artificial intelligence) technologies to assessing the truthfulness of human statements.

We analyzed articles on this topic published in foreign and Russian media in order to highlight the advantages and disadvantages of both the technology described in the article under study and other approaches to the issue. In the course of our research, we also examined the following aspects of the problem: whether LiarLiar.ai can detect lies with high accuracy, how legitimate its widespread use is, and in which areas of life it can be applied.

Advantages and disadvantages of artificial intelligence in lie detection

The article “Artificial intelligence-based lie detection algorithms: the final solution to the problem of deception?” by Marcin Frontskevich, published on October 5, 2023 on the TS2 website, describes the advantages of artificial intelligence-based lie detection algorithms: the ability to analyze huge amounts of data in a short period of time, speed and efficiency.

The author draws attention to critics who claim that these algorithms can be biased and discriminate against certain people. Despite these objections, Marcin Frontskevich concludes that the improvement of artificial intelligence and its subsequent evolution can lead to a definitive solution to the problem of detecting deception.

We turned to an expert, Nikolai Yuryevich Zolotykh, Doctor of Physical and Mathematical Sciences and Director of the Institute of Information Technology, Mathematics and Mechanics of the National University of Economics, to find out whether an algorithm built on existing achievements in the field of computer vision can work. He noted that the use of artificial intelligence technologies in this area is scientifically justified. In any case, however, how correctly artificial intelligence detects the signs of a false statement is strongly influenced by the quality of the video call, the lighting, and video compression.

An artificial intelligence tool captures changes using remote photoplethysmography, a technology that tracks changes in skin color correlated with heart rate, and changes in heart rate tell a lot about whether a person is lying or telling the truth. However, low video resolution can significantly reduce the accuracy of the neural network. Obviously, there must be appropriate requirements for camera quality, — the expert noted.
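To illustrate the principle the expert describes, here is a minimal sketch of the remote photoplethysmography (rPPG) idea: the per-frame mean green value of a face region is treated as a pulse signal, and the dominant frequency within the physiological band gives an estimate of the heart rate. This is a deliberate simplification under our own assumptions, not a reconstruction of LiarLiar.ai’s actual pipeline; the function name and the synthetic data are illustrative only.

```python
import numpy as np

def estimate_heart_rate(green_means, fps, lo=0.7, hi=3.0):
    """Estimate heart rate (bpm) from a per-frame mean green-channel signal.

    Classic rPPG assumption: blood volume changes under the skin slightly
    modulate the green channel of a face region. We remove the DC offset
    and pick the dominant frequency in the physiological band lo..hi Hz
    (roughly 42..180 beats per minute).
    """
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                              # remove DC component
    spectrum = np.abs(np.fft.rfft(x))             # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)  # frequency of each bin
    band = (freqs >= lo) & (freqs <= hi)          # plausible heart-rate band
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                            # Hz -> beats per minute

# Synthetic demo: a 1.2 Hz pulse (72 bpm) sampled at 30 fps for 10 seconds,
# with a constant brightness offset and a little camera noise.
fps = 30
t = np.arange(300) / fps
rng = np.random.default_rng(0)
signal = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 100 + 0.1 * rng.standard_normal(300)
bpm = estimate_heart_rate(signal, fps)
```

The sketch also makes the expert’s caveat concrete: heavy video compression or low resolution adds noise to `green_means`, and once the noise floor approaches the amplitude of the pulse component, the spectral peak (and thus the estimate) becomes unreliable.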

Dmitry Bulgakov, in his article “Without deception: learning to recognize lies with the help of IT technologies” dated March 26, 2023, published on the website iz.ru, examines the problems of detecting lies by technical means, drawing on the comments of Russian scientists.

The author interviews the creators of the program “Emoradar”, designed to help profilers (specialists in the analysis of human behavior). Dmitry Bulgakov quotes Natalia Dybina, head of the MassEffect Agency for Strategic Communications and a profiler-verifier, who considers the main reason for the inefficiency of AI to be a person’s individual internal reasons for hiding a particular reaction. According to her, machines will not be able to recognize a lie in such cases.

In his conclusion, the author quotes the expert’s words about the impossibility of machines replacing the work of profilers, since the human emotional sphere is too complex.

It is known that artificial intelligence algorithms rely on the analysis of general psychological changes in a person’s body and facial expressions when he tells a lie. One of the main problems is that the neural network does not take into account the individual characteristics of each person. Thus, the LiarLiar.ai mechanism is aimed at reading physiological indicators that change when a person lies; at the same time, the tool does not account for people commonly called “pathological liars”. We turned to an expert, Valeria Alekseevna Demareva, Candidate of Psychological Sciences, associate professor of the Department of Social Security and Humanitarian Technologies and head of the Laboratory of Cyberpsychology, with a request to predict how AI would behave if a liar is confident in his own words.

In fact, this is the biggest difficulty: if a person is sincerely sure that events happened a certain way, such information cannot be reliably detected. Emotionally, he has so convinced himself that his reaction will be absolutely identical to the one that accompanies the presentation of true information, and most likely there will be no cognitive effort to construct a fictional situation. Therefore, exposing a pathological liar is more impossible than possible, — she believes.

The effectiveness of artificial intelligence in comparison with a polygraph

The article “Deceive me. Can neural networks recognize human lies?”, featuring comments from Maria Churikova, a leading researcher at Neurodata Lab, was published on the RBC Trends website on September 7, 2021.

The article presents the differences between an AI that recognizes deception and a lie detector. The expert concludes that both mechanisms merely measure different physiological parameters and cannot independently determine whether a person has lied; they are an auxiliary tool for a specialist.

Our expert, Yuri Viktorovich Batashev, General Director of Quantum Inc. and IT specialist, draws attention to the inability of artificial intelligence algorithms to provide a fully accurate assessment of the truthfulness or falsity of a particular statement. According to him, artificial intelligence can be useful only in combination with the work of a psychologist-interpreter.

AI can rather work as a kind of simplified polygraph. That is, it accurately, mathematically, quantitatively measures certain specific features and, based on them, gives some kind of integral assessment. But this assessment may indicate, like a polygraph reading, that a person is experiencing discomfort or tension while uttering these words, and this does not directly mean that he is lying. As with a polygraph, a huge part of the work is the professionalism of the psychologist-interpreter who looks at the objective readings of the device, — says the expert.


Psychological and ethical aspects of neural networks

The article “Lie detectors have always aroused suspicion. Artificial intelligence has only made the problem worse” by Jake Bittle, dated March 13, 2020, and published on the MIT Technology Review website, examines attempts to recognize deception with the aid of artificial intelligence.

The author cites as an example the stories of people whom the polygraph found guilty, although they were not. Jake Bittle concludes that the applied artificial intelligence technologies developed to detect lies can exacerbate the problem of punishing innocent people and disqualify worthy applicants for a job or loan.

An editorial titled “Artificial Intelligence Lie Detector: Advanced Lie Detection Techniques” was published on the ThinklML website on August 24, 2022.

In it, the authors argue that AI can be useful when working with social networks, as well as to ensure the safety of people in crowded places, such as airports. They conclude that the neural network for determining the truth is a great technical achievement.

Legal aspects of the use of artificial intelligence

We turned to an expert, Evgenia Evgenievna Chernykh, Candidate of Law, Associate Professor and Dean of the Faculty of Law of Lobachevsky University, with a request to comment on the legal possibilities of using AI in various fields. According to her, LiarLiar.ai can legally be used even without another person’s knowledge. However, the data obtained with such algorithms cannot be regarded as significant, and therefore their use in the law enforcement system is impossible.

From the point of view of the law, it is legal. There is indeed an article in the Criminal Code on violation of privacy (Article 137 of the Criminal Code), and the subject of regulation would be the information received, but in this case there is no information that could contain any secret. Therefore, there can be no liability here. But we must not forget that we have no right to accuse a person of lying if we cannot prove it. The algorithm, like the polygraph, is imperfect, and neither can be considered basic evidence in court, — said expert Evgenia Chernykh.

Conclusion

Thus, a lie detection algorithm based on artificial intelligence technologies has a number of advantages, chief among them the speed and efficiency with which it extracts a result from an impressive amount of data. Nevertheless, the capabilities of artificial intelligence do not allow it to correctly perform an in-depth analysis of human emotional manifestations tied to individual characteristics. In particular, there is no doubt that in the case of pathological liars the results of lie detection by a neural network will be unreliable.

Authors: Anna Sadovina, Alina Kalashnikova, Darya Nazarova, Alyona Manina, Anastasia Zaharova.
Cover: Alina Kalashnikova
