A faked version of Kyiv leader Klitschko fooled mayors across Europe—but it’s 
not clear this was really a technical ‘deepfake’

By David Meyer  June 27, 2022 
https://fortune.com/2022/06/27/fake-kyiv-klitschko-giffey-ludwig-martinez-almeida-karacsony-colau-deepfake-ai/


A few months ago, a “deepfake” video featured a bogus Volodymyr Zelenskyy 
appearing to urge the surrender of his fellow Ukrainians.

No one was fooled, thanks to the ersatz Zelenskyy’s poor quality, but experts 
warned that future deepfakes—“A.I.”-generated figures purporting to be real 
people—might not be so obvious.

Judging by what happened across at least four European capitals last week, the 
time to worry may have arrived.

Here’s what is certain: At least four European mayors had video calls with a 
faked representation of Vitali Klitschko, the mayor of Kyiv.

Investigations, involving police and state security officials, are now underway 
across the continent.

Vienna Mayor Michael Ludwig and “Klitschko” spoke on Wednesday, with Ludwig 
ending the call none the wiser—indeed, he was so convinced that he had really 
spoken with his Kyiv counterpart that he tweeted and issued a press release 
about it, including photos of the call taking place.

Berlin’s Franziska Giffey and Madrid’s José Luis Martínez-Almeida had their 
rounds with the bogus heavyweight champ on Friday.

For his Berlin call, “Klitschko” asked to speak in Russian with a German 
translator—odd, given that he lived in Hamburg for years during his boxing 
career, and speaks German fluently. Giffey’s spidey-sense was further triggered 
when he referred to Ukrainian refugees cheating the German benefits system and 
asked for help in getting male Ukrainian refugees sent back to serve in 
Ukraine, and in organizing a Christopher Street Day parade in Kyiv.

Giffey’s office, which went public about the incident later on Friday, said the 
call was terminated early, and the Ukrainians subsequently confirmed Berlin's 
mayor had not spoken with the real Klitschko.

In Madrid, Martínez-Almeida reportedly broke off his call after a few minutes, 
once he became suspicious that he wasn’t really conversing with Klitschko.

“The city hall has filed a complaint with the police for an alleged crime of 
impersonation of the mayor of Kyiv in an interview via videoconference with the 
mayor,” Martínez-Almeida’s office told Fortune, adding: “The mayor has 
described as ‘absolutely intolerable’ that these events could take place at a 
time when Kiev is being besieged by the invasion of the Russian army.”

Meanwhile, Budapest mayor Gergely Karácsony said in a Saturday Facebook post 
that he too had “recently” been targeted, and had ended the call following 
“several strange, suspiciously provocative questions.”

The invitation to the Hungarian video call had come from a spoofed email 
address purporting to be that of the Kyiv mayor’s office. That was also true in 
the Spanish case—Bild published a screen-grab of that email.

Klitschko himself (the real one) posted a video on Twitter on Saturday saying 
the incidents needed urgent investigation, and pointed out that anyone who 
wants to speak with him in German or English needs no translator.

So what exactly happened here?

Giffey was quick to say she had spoken with a deepfake, and described the 
technology as “a tool of modern warfare.” (Speaking to Berlin radio, she also 
said she had heard Barcelona mayor Ada Colau had been similarly 
targeted—Fortune has inquired with Colau’s office whether this is the case.)

Karácsony also said the fake Klitschko had been created by “professional 
deepfake technology.”

Some experts aren’t so sure.

The investigative journalist Daniel Laufer said in a Sunday Twitter thread that 
the published images of the call suggest a true deepfake—in which Klitschko’s 
face would have been at least partially generated by computer systems trained 
on real footage or imagery—was not involved.

That’s partly because the images of Klitschko’s supposed camera feed correspond 
exactly with frames from what is clearly the source material: a real interview 
that Klitschko conducted in April, which is available on YouTube.

Also, Klitschko keeps moving his head as he speaks, but there are no telltale 
artifacts around his head that would suggest an image-generating system trying 
to keep up—even though he is sitting in front of a visually complex background.

“If all five images look exactly the same as in the source material (with 
matching facial expressions and background): where does the A.I. manipulation 
lie?” Laufer tweeted.

He suggested the fake Klitschko might instead have been generated by precutting 
snippets from the original video and reassembling them “in real time,” with 
viewers chalking up any janky transitions to the nature of video calls.

Florian Gallwitz, a professor of computer science and media, concurred with the 
assessment that this was not a proper deepfake as the term is understood. “The 
main purpose of the deepfake drivel is probably to cover up how clumsy the 
tricks were for which you fell,” he tweeted.

Whatever the technology that was used, the result was clearly good enough to 
fool some of the people for some of the time, and that should be cause for 
concern.

The fake-Klitschko incidents come at a time when people—including those within 
the tech industry—are increasingly worried about the potential for new methods 
to deceive people about who and what they are seeing and hearing online.

That concern was on full display last week, when Amazon showed off a potential 
new Alexa feature in which the virtual assistant can mimic someone’s voice, 
after being trained on less than a minute of genuine audio.

The company not-at-all-creepily suggested the feature could be used to have a 
kid’s book read by the voice of a dead grandmother. Horrified observers from 
the tech community pointed out the technology was also ripe for abuse by 
scammers.
