Nikčević: There is no proof it is the real Medenica, but there is also no forensic evidence that would definitively close the case by proving it is all artificial intelligence
In lower-quality AI videos, errors can be noticed in facial expressions, lighting, movements, or synchronization between sound and image. However, with more advanced tools, especially when a combination of real footage and AI processing is used, those differences become very subtle, says Nikčević, adding that there is no simple test that can determine whether something is AI-generated or not

He first disappeared while under house-arrest surveillance, then appeared in a video on social media a day later. Despite a police statement, the question remains whether the person in the footage is really Miloš Medenica, convicted in Montenegro of organizing a criminal group, or a product of artificial intelligence.
- The most honest answer is that the public currently has no proof that it is the real Medenica, but there is also no publicly presented forensic evidence that would definitively close the case by proving it is all artificial intelligence - Snežana Nikčević from the Montenegrin NGO 35mm, which promotes universal democratic values, told Radio Free Europe (RFE/RL).
On January 28, Miloš Medenica was sentenced in a first-instance ruling to more than ten years in prison for organized crime, smuggling, and unlawful influence. His mother, Vesna Medenica, the former President of the Supreme Court, was also sentenced to ten years in prison.
While his mother had her passport confiscated and her travel ban extended after the verdict was announced, Medenica, who had been under a measure barring him from leaving his apartment, was ordered into detention as a flight risk. Instead of surrendering to custody, Medenica became unreachable at the end of January.
An Interpol warrant has since been issued for him, and his mother was remanded in custody, which the Court of Appeal confirmed on March 23 by rejecting an appeal against the detention decision.
Then, on February 1, the first video was published on the platform X. Shortly after its release, police stated that it was manipulative content containing „misleading information aimed at undermining the professional credibility and reputation of the management of the Ministry of Interior and the Police Directorate“.
- It is a video allegedly showing fugitive Miloš Medenica, which has been synthetically generated, i.e., created using artificial intelligence-based tools, with the aim of creating a false impression and misleading the public - they said.
In one of the first videos, the alleged Medenica says he will speak out every day until he is arrested or until it is disproven that he is a bot.
Nearly two months later, the video posts continue: the person in the recordings has appeared on a television program, while in another program the police demonstrated how footage can be misused with the help of artificial intelligence. Miloš Medenica, meanwhile, remains at large.
Snežana Nikčević, who is also an ambassador of the Ethical AI Alliance for the Western Balkans, says that an additional problem with videos is that they involve a person who is on the run, making the entire situation more sensitive and prone to manipulation.

She adds that without the original footage and a „serious forensic analysis“, there is no reliable way to determine whether it is „him or AI, no matter how convincing it appears“.
- And when this is combined with already seriously eroded trust in the security sector and constant mutual accusations within the system, it is entirely expected that citizens do not trust any version of the story - Nikčević adds.
Previous research by the Montenegrin Centre for Civic Education showed that 75.3 percent of respondents consider Montenegrin society to be corrupt. They cite the system (23.9 percent), politicians (23.6 percent), the judiciary, and the police (15.3 percent each) as the main culprits.
The possibility that a video we are watching does not represent a real person but rather so-called „deepfake“ material created with artificial intelligence tools only further deepens distrust between citizens and institutions.

Europol previously stated in a publication that at a time when distrust in institutions is growing, „deepfakes“ and manipulated footage can be used to negatively influence public opinion.
- The impact of such images and recordings should not be underestimated - Europol said.
They further explain that deepfake technology can produce material that convincingly shows people saying or doing things they never actually did.
- Their goal is to amplify existing conflicts and debates, undermine trust in state institutions, and provoke anger and emotions in general. The erosion of trust will likely make police work more difficult - Europol states.
The Ministry of Interior of Montenegro did not respond to RFE/RL’s inquiry on how much the need to verify whether videos are real complicates the search for Medenica.
„It all depends on the quality of manipulation“
Europol notes that although deepfake materials can produce highly convincing content, there are sometimes flaws visible upon close inspection. Examples include blurred areas around the face, lack of blinking, and inconsistencies in hair, veins, scars, and similar details.
Determining whether a video was created using artificial intelligence depends on the quality of manipulation, Nikčević explains.
- In lower-quality AI videos, errors can be noticed in facial expressions, lighting, movements, or synchronization between sound and image. But with more advanced tools, especially when a combination of real footage and AI processing is used, those differences become very subtle - says Nikčević, adding that there is no simple test to determine whether something is AI or not.
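One of the tells Europol lists, lack of blinking, is also one of the few that can be checked numerically. A widely used heuristic is the Eye Aspect Ratio (EAR) of Soukupová and Čech, which compares an eye's vertical opening to its width across video frames: a blink shows up as a brief dip below a threshold. The sketch below, assuming NumPy and six (x, y) eye landmarks per frame, illustrates the idea; the function names `eye_aspect_ratio` and `blink_count` and the 0.2 threshold are illustrative, and in practice the landmarks would come from a face-landmark detector, which is outside this sketch.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye Aspect Ratio: (vertical openings) / (2 * horizontal width).

    `eye` is six (x, y) landmarks ordered around the eye: outer corner,
    two upper-lid points, inner corner, two lower-lid points.
    The ratio drops sharply when the eye closes.
    """
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])  # first vertical opening
    v2 = np.linalg.norm(eye[2] - eye[4])  # second vertical opening
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal width
    return (v1 + v2) / (2.0 * h)

def blink_count(ear_series, threshold=0.2):
    """Count blinks as downward crossings of the EAR threshold."""
    blinks, below = 0, False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks += 1
            below = True
        elif ear >= threshold:
            below = False
    return blinks

# Synthetic landmarks for illustration: an open and a nearly closed eye.
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]
```

Run over a real clip, a face that never dips below the threshold for minutes on end would be one weak signal of synthesis; as Nikčević notes, no single test of this kind is conclusive on its own.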
When it comes to legislation, the European Union has implemented the AI Act, a comprehensive law regulating artificial intelligence. Montenegro is close to becoming a member of the Union, meaning it will need to align its laws with the EU acquis.
Nikčević explains that on paper Montenegro is following the EU, working on an AI strategy and legal harmonization.
- In practice, things look quite different. For example, we still do not have a data protection law fully aligned with the GDPR (which is a prerequisite for our protection in the digital space within the EU regulatory framework). This case clearly exposes systemic shortcomings, especially in the security sector, lack of capacity, insufficient expertise in advanced technologies, and a rather fragmented institutional response - Nikčević explains.
She adds that implementation will also be a problem, partly due to the general level of digital literacy and partly due to the lack of qualified personnel.
- And generally non-transparent and insufficiently controlled use of AI technologies in practice - Nikčević said.
Last year’s survey by the U.S.-based Pew Research Center on artificial intelligence showed that far more people are concerned about AI than excited by it.
Europol warns that a decline in public trust in government and media is one of the side effects of using deepfake materials for disinformation.
- One of the most harmful aspects of deepfakes may not be the disinformation itself, but the principle that any information can be false - Europol states in its report „Malicious Uses and Abuses of Artificial Intelligence“.
In such a climate, more than a dozen videos featuring Miloš Medenica have been published on social media. He remains on the run after a first-instance conviction for creating and leading a criminal organization involved in drug trafficking, cigarette smuggling, bribery, and illegal possession of weapons.