Wed. Dec 25th, 2024
  • A student in St. Petersburg created a deepfake of Russian Federation President Vladimir Putin to converse with the real Putin, drawing attention to the dangers of artificial intelligence (AI) and neural networks.
  • The deepfake technology is improving rapidly and poses a significant threat as it can be deployed to manipulate people’s opinions or to impersonate high-profile individuals.
  • Experts suggest more stringent oversight and education are required to manage the related risks.

Russian President Vladimir Putin recently interacted with a deepfake version of himself, created by a student in St. Petersburg. This incident has again ignited discussions on the potential dangers of such artificial intelligence-driven technology.

Putin’s AI-generated doppelganger seemed to surprise him initially. He acknowledged the resemblance but emphasized that only he can speak with his voice. Although Putin called the deepfake his first twin, this isn’t his first encounter with the technology: in June, hackers broadcast a deepfake of Putin on television falsely claiming that martial law had been declared in response to an invasion by Ukraine’s army.

Jason Mollica, a lecturer in the School of Communication at American University, highlighted the significant threat this technology can pose: AI-based technologies have their benefits, but the risks they present are considerable.

Experts have pointed out how deepfake technology is continually improving, making it increasingly difficult to authenticate videos featuring political leaders and high-profile figures. Alongside this, there is the extreme danger of a high-ranking executive or political leader being convincingly replaced by a digital clone, which could conceal a significant change in leadership or shift in power.

Rob Enderle of the Enderle Group suggests that this technology could validate conspiracy theories, allowing an entity possessing it to gain substantial power unbeknownst to those tasked with preventing such moves. Echoing these sentiments, Mollica highlighted the risk of disinformation spread via deepfake content across social networks.

Although some states have enacted legislation to curtail malicious deepfakes, it is doubtful that such measures will effectively prevent their creation. Mollica emphasized the need for stringent oversight and further education on using AI responsibly.

Moreover, the proliferation of deepfakes could potentially serve as a defense for politicians, celebrities, or CEOs caught in compromising situations, which could further muddy the waters of truth and disinformation.
