Deepfakes

horsechoker

The Caf's Roy Keane.
Joined
Apr 16, 2015
Messages
51,768
Location
The stable
I think for regular people you need to limit the number of photos and videos of yourself online to avoid this kind of thing. There's nothing actors can do, though, other than have people at the ready to prove the videos are fake.

What if this affects football and we start seeing fake goals?
 

Amar__

Geriatric lover and empath
Joined
Sep 2, 2010
Messages
24,075
Location
Sarajevo
Supports
MK Dons
There's a valid reason to think that this program is made by Olly.
 

peridigm

Full Member
Joined
Dec 3, 2011
Messages
13,836
Reddit suspended the deepfakes subs due to the NSFW clips.
 

DWelbz19

Correctly predicted Portugal to win Euro 2016
Joined
Oct 31, 2012
Messages
33,951
If anything that just increases its notoriety and pushes it onto seedier sites.
 

Maagge

enjoys sex, doesn't enjoy women not into ONS
Joined
Oct 9, 2011
Messages
11,939
Location
Denmark
Sam Harris has been talking about software like this for a while. Once it's perfected we're into a whole new world of Fake News.
A colleague of mine was doing some programming while travelling between Stanford and San Francisco. The guy next to him asked whether he was doing statistics or something. They got chatting, and it turned out this bloke was working for a Silicon Valley startup that was trying to develop software which could make completely photorealistic pictures from nothing. My colleague naturally asked whether that wasn't a dangerous endeavour, what with all the fake news. Apparently they hadn't thought about that.
 

Solius

¯\_(ツ)_/¯
Staff
Joined
Dec 31, 2007
Messages
86,344
Very scary to think what people will be able to claim as fake in the future. And scary to think about what kind of things will be faked in the future. The era of misinformation is about to get worse.
 

Pexbo

Winner of the 'I'm not reading that' medal.
Joined
Jun 2, 2009
Messages
68,532
Location
Brizzle
Supports
Big Days
Very scary to think what people will be able to claim as fake in the future. And scary to think about what kind of things will be faked in the future. The era of misinformation is about to get worse.
It is going to need strict regulation of social media platforms' responsibility for the videos they allow to be shared. I can't imagine it would be impossible to create AI that analyses videos and looks for key signs that they have been generated or doctored. It should be the platforms' responsibility to either remove them or flag them as fake.
 

Revan

Assumptionman
Joined
Dec 19, 2011
Messages
49,535
Location
London
Very scary to think what people will be able to claim as fake in the future. And scary to think about what kind of things will be faked in the future. The era of misinformation is about to get worse.
At the moment, it is relatively easy to know whether something is fake or not. Problem is, it could become virtually impossible to tell before long *.

* Actually working on similar stuff, but for different purposes.
 

Revan

Assumptionman
Joined
Dec 19, 2011
Messages
49,535
Location
London
We’ve already entered the era of unreality and it’s going to get worse.

@Revan

Would there be no way of having software that could determine what is or isn’t a deepfake?
Hard to say. The current methods are still quite primitive and so relatively easy to detect.

There is a type of neural network called a Generative Adversarial Network (GAN), introduced in 2014, initially for image generation, but in the last few years they have been used for many other things (still mostly image manipulation). In layman's terms, those networks generate fake images that look quite real and have a distribution very similar to that of real images. Nowadays you can generate beautiful high-resolution images which are hard to tell apart from real ones. They still don't work as well for manipulation as for pure generation, and still don't work as well on videos, but we are getting there. The problem is that once GANs become reliable enough to do DeepFake videos, there is feck all you can do to tell those videos from real ones (again in layman's terms, a GAN is actually two networks competing against each other, where one tries to generate stuff that looks real while the other tries to discriminate between fake and real. If the generator wins, then there is no way of knowing that those images are fake, because the generator is so good at fooling the discriminator. Again, this is simplified, but hopefully you get the idea).

Anyway, we are probably a few years away from that happening, and many people are working on this, so someone might find new ways of telling fake from real. It is both exciting and scary.
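
(Not part of the original post, but a minimal PyTorch sketch of the two-network setup described above. The network sizes, learning rates and the real_images batch are placeholders; real DeepFake models are far larger and operate on video frames rather than toy vectors.)

```python
# Minimal GAN sketch: two networks competing, as described above.
# All sizes and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # single real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):  # real_images: (batch, img_dim) tensor from some dataset
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator: learn to tell real images from the generator's fakes.
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    loss_d = bce(discriminator(real_images), ones) + bce(discriminator(fakes), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: never sees real data, only the discriminator's verdict on its fakes.
    fakes = generator(torch.randn(batch, latent_dim))
    loss_g = bce(discriminator(fakes), ones)  # try to get fakes labelled as real
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# e.g. train_step(torch.rand(16, img_dim) * 2 - 1)  # a batch of flattened images in [-1, 1]
```

Once the discriminator is reduced to guessing, a detector built the same way has nothing left to exploit, which is the point being made above.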
 

Classical Mechanic

Full Member
Joined
Aug 25, 2014
Messages
35,216
Location
xG Zombie Nation
Hard to say. The current methods are still quite primitive and so relatively easy to detect.

There is a type of neural network called a Generative Adversarial Network (GAN), introduced in 2014, initially for image generation, but in the last few years they have been used for many other things (still mostly image manipulation). In layman's terms, those networks generate fake images that look quite real and have a distribution very similar to that of real images. Nowadays you can generate beautiful high-resolution images which are hard to tell apart from real ones. They still don't work as well for manipulation as for pure generation, and still don't work as well on videos, but we are getting there. The problem is that once GANs become reliable enough to do DeepFake videos, there is feck all you can do to tell those videos from real ones (again in layman's terms, a GAN is actually two networks competing against each other, where one tries to generate stuff that looks real while the other tries to discriminate between fake and real. If the generator wins, then there is no way of knowing that those images are fake, because the generator is so good at fooling the discriminator. Again, this is simplified, but hopefully you get the idea).

Anyway, we are probably a few years away from that happening, and many people are working on this, so someone might find new ways of telling fake from real. It is both exciting and scary.
Cheers. Fascinating stuff. So it will be like a game of cat and mouse.
 

Revan

Assumptionman
Joined
Dec 19, 2011
Messages
49,535
Location
London
Cheers. Fascinating stuff. So it will be like a game of cat and mouse.
Kind of. The generator actually never sees the real data; it is just trained to do well against the discriminator (which in turn is trained to defeat the generator). So when it works, it really works.

However, we are still not there. They are really difficult to train, they do things you are not asking them to do (or, well, things you think you are not asking them to do), and for almost every problem there are better solutions. However, they are very general-purpose and might have unlimited potential.

I once heard a talk from one of the world's leading researchers on them (from Berkeley), and he said something like: 'I don't really care about GANs in DeepFakes, because they don't work well. And if we somehow manage to train the generator well, then there is nothing we can do about anti-DeepFake detection, so better not to worry.' A pretty grim statement (and, well, a bit controversial for the sake of being thought-provoking), but probably not far from the truth.
 

Arruda

Love is in the air, everywhere I look around
Joined
Apr 8, 2009
Messages
12,584
Location
Azores
Supports
Porto
I think that, at worst, depending on how the technology evolves, videos will quickly lose their "evidence" value in people's minds.

Sure, someone can fake a perfect video of Sanders giving a speech at one of those weird paedophile conventions, but 10 seconds later someone will have replaced his head with Trump's and that will have gone viral too. Even the most dimwitted will realize it's worthless, because everyone watches videos online. Soon people will get bored with it, and only the most creative and funny ones will get any attention.

I don't see much harm coming out of this; there are far bigger concerns about how social media can influence society. Some areas like forensics and security might have to update some of their tools for verifying digital material, but I think the impact won't go further than that.
 

stepic

Full Member
Joined
Aug 31, 2006
Messages
8,667
Location
London
I think that, at worst, depending on how the technology evolves, videos will quickly lose their "evidence" value in people's minds.

Sure, someone can fake a perfect video of Sanders giving a speech at one of those weird paedophile conventions, but 10 seconds later someone will have replaced his head with Trump's and that will have gone viral too. Even the most dimwitted will realize it's worthless, because everyone watches videos online. Soon people will get bored with it, and only the most creative and funny ones will get any attention.

I don't see much harm coming out of this; there are far bigger concerns about how social media can influence society. Some areas like forensics and security might have to update some of their tools for verifying digital material, but I think the impact won't go further than that.
For me the abuse is ripe for targeting e.g. pensioners who only subscribe to certain channels. There are loads of right-wing BS Facebook groups that could just push through whatever deepfake videos they want. It's alright if you're savvy enough, but judging by the BS my mum posts on FB, I'm sure there are a lot who aren't.
 

Eboue

nasty little twerp with crazy bitter-man opinions
Joined
Jun 6, 2011
Messages
61,106
Location
I'm typing this with my Glock 19 two feet from me
For me the abuse is ripe for targeting e.g. pensioners who only subscribe to certain channels. There are loads of right-wing BS Facebook groups that could just push through whatever deepfake videos they want. It's alright if you're savvy enough, but judging by the BS my mum posts on FB, I'm sure there are a lot who aren't.
the solution is to not let olds vote or handle money
 

Arruda

Love is in the air, everywhere I look around
Joined
Apr 8, 2009
Messages
12,584
Location
Azores
Supports
Porto
For me the abuse is ripe for targeting e.g. pensioners who only subscribe to certain channels. There are loads of right-wing BS Facebook groups that could just push through whatever deepfake videos they want. It's alright if you're savvy enough, but judging by the BS my mum posts on FB, I'm sure there are a lot who aren't.
I'm not sure I agree, though obviously this is all conjecture. I know a lot of people like you describe, but even someone like your mother probably knows photoshopping exists and would recognize a picture of you on Mars as fake. I just think "videos can be fake" will become very common knowledge, except for a very marginal segment of nutcases/very-low-IQ types, and most of those were beyond saving anyway.
 

stepic

Full Member
Joined
Aug 31, 2006
Messages
8,667
Location
London
I'm not sure I agree, though obviously this is all conjecture. I know a lot of people like you describe, but even someone like your mother probably knows photoshopping exists and would recognize a picture of you on Mars as fake. I just think "videos can be fake" will become very common knowledge, except for a very marginal segment of nutcases/very-low-IQ types, and most of those were beyond saving anyway.
The danger isn't necessarily the overt stuff like me on Mars, but the (potential for) more subtle, pernicious stuff.
 

Mogget

Full Member
Joined
Nov 27, 2013
Messages
6,536
Supports
Arsenal
I'm not sure I agree, though obviously this is all conjecture. I know a lot of people like you describe, but even someone like your mother probably knows photoshopping exists and would recognize a picture of you on Mars as fake. I just think "videos can be fake" will become very common knowledge, except for a very marginal segment of nutcases/very-low-IQ types, and most of those were beyond saving anyway.
What about the effect that has on real videos, then? Say a genuine video is released of a politician in a compromising situation. He can easily pass it off as fake, and if people already distrust videos, who will question it?
 

RK

Full Member
Joined
May 23, 2008
Messages
16,102
Location
Attacking Midfield
I think that, at worst, depending on how the technology evolves, videos will quickly lose their "evidence" value in people's minds.

Sure, someone can fake a perfect video of Sanders giving a speech at one of those weird paedophile conventions, but 10 seconds later someone will have replaced his head with Trump's and that will have gone viral too. Even the most dimwitted will realize it's worthless, because everyone watches videos online. Soon people will get bored with it, and only the most creative and funny ones will get any attention.

I don't see much harm coming out of this; there are far bigger concerns about how social media can influence society. Some areas like forensics and security might have to update some of their tools for verifying digital material, but I think the impact won't go further than that.
I'm on this train of thought. I've been thinking about it for a while, since this kind of technology was always inevitable.

I think part of the fear comes from the delicate social transition we're in at the moment: unprecedented access to unprecedented amounts of information, with the proportion of fake information high enough to cause problems but still low enough that healthy scepticism hasn't become the default. I see the balance moving in a positive direction eventually; once fakery becomes everyday, common and understood, we'll treat it with scepticism by default.

We'll still have some people choosing exactly what they want to believe, which isn't going to change in the next few centuries. But at least reasonable analysis of what's real and what isn't should become the norm.

And when my nudes finally get leaked, I can claim it's just the AI that has a tiny knob.
 

horsechoker

The Caf's Roy Keane.
Joined
Apr 16, 2015
Messages
51,768
Location
The stable
Recently a channel has been uploading speech-synthesis clips to YouTube. I see this as related to deepfakes in the sense that you can make somebody's voice say whatever you want. Although there are some funny ones like:


You can essentially create a song with their voice, which would lead to all sorts of problems with being able to make music without the artist being there.


Moreover, couple these with deepfakes and you can make somebody say whatever you want. The truth becomes murkier every day.
 

Pexbo

Winner of the 'I'm not reading that' medal.
Joined
Jun 2, 2009
Messages
68,532
Location
Brizzle
Supports
Big Days
Recently a channel has been uploading speech-synthesis clips to YouTube. I see this as related to deepfakes in the sense that you can make somebody's voice say whatever you want. Although there are some funny ones like:


You can essentially create a song with their voice, which would lead to all sorts of problems with being able to make music without the artist being there.


Moreover, couple these with deepfakes and you can make somebody say whatever you want. The truth becomes murkier every day.
If you want to hear something genuinely terrifying, here's a recurrent neural network learning to talk:

 

Revan

Assumptionman
Joined
Dec 19, 2011
Messages
49,535
Location
London
If you want to hear something genuinely terrifying, here's a recurrent neural network learning to talk:

Nice. It's quite an interesting question why progress on speech/audio has lagged so far behind progress on language, which in turn has lagged behind progress on vision. The basic techniques are quite similar to each other.
 

shamans

Thinks you can get an STD from flirting.
Joined
Oct 25, 2010
Messages
18,226
Location
Constantly at the STD clinic.
Nice. It's quite an interesting question why progress on speech/audio has lagged so far behind progress on language, which in turn has lagged behind progress on vision. The basic techniques are quite similar to each other.
How do you mean lagged behind? Language processing is still the most difficult thing out there. Speech and voice recognition have come a long way. Vision and speech are easier to decipher than NLP.
 

Pexbo

Winner of the 'I'm not reading that' medal.
Joined
Jun 2, 2009
Messages
68,532
Location
Brizzle
Supports
Big Days
How do you mean lagged behind? Language processing is still the most difficult thing out there. Speech and voice recognition have come a long way. Vision and speech are easier to decipher than NLP.
I think he's referring to synthesising language rather than interpreting it.
 

Revan

Assumptionman
Joined
Dec 19, 2011
Messages
49,535
Location
London
How do you mean lagged behind? Language processing is still the most difficult thing out there. Speech and voice recognition have come a long way. Vision and speech are easier to decipher than NLP.
You're right, a lapsus on my part: language has lagged behind speech, which in turn has lagged behind vision.

Just as interesting to me is that, despite the basic tools ultimately being the same (backprop and SGD), speech and NLP needed quite different tools to succeed (WaveNet and attention). Also, I am a bit surprised that the main tool in image generation (and now manipulation, including DeepFakes), namely GANs, has ranged from very unsuccessful to totally useless when it comes to NLP and speech.

In the end, and probably not surprisingly in hindsight, all three problems seem to be fundamentally different from each other and to require very different solutions. Still, everything is NN-based, which is nice.
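
(Not from the post, just a rough illustration of the 'attention' mechanism mentioned above: scaled dot-product self-attention as used in NLP models. Shapes and sizes are arbitrary.)

```python
# Scaled dot-product attention: each position builds its output as a weighted
# mix of every other position, which is the core of the "attention" tool above.
import math
import torch

def attention(q, k, v):                               # q, k, v: (batch, length, dim)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # pairwise similarities
    weights = scores.softmax(dim=-1)                  # how much each position attends to the others
    return weights @ v

x = torch.randn(2, 10, 64)                            # toy batch of sequences
out = attention(x, x, x)                              # self-attention over the sequence
```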
 

Amarsdd

Full Member
Joined
Jun 7, 2013
Messages
3,299
You're right, a lapsus on my part: language has lagged behind speech, which in turn has lagged behind vision.

Just as interesting to me is that, despite the basic tools ultimately being the same (backprop and SGD), speech and NLP needed quite different tools to succeed (WaveNet and attention). Also, I am a bit surprised that the main tool in image generation (and now manipulation, including DeepFakes), namely GANs, has ranged from very unsuccessful to totally useless when it comes to NLP and speech.

In the end, and probably not surprisingly in hindsight, all three problems seem to be fundamentally different from each other and to require very different solutions. Still, everything is NN-based, which is nice.
That's true, they are very different tasks, especially image generation compared with NLP and speech generation. Generating images is mostly about modelling distributions, usually with hard boundaries between them, but for language and speech generation you have to model those distributions (usually far more of them than for images) with added temporal/sequential dependencies. I don't think we have yet figured out the right architecture and algorithm to deal with such sequential dependencies the way we have conv networks for images. Add to that the importance of pre-processing in NLP and speech, and it becomes even more difficult.
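
(Again not from the post, just a toy illustration of the sequential dependency being described: every step of generation is conditioned on everything produced so far, whereas a conv net can emit a whole image in a single forward pass. The vocabulary size, hidden size and start token are arbitrary and the weights are untrained.)

```python
# Autoregressive generation: each new token depends on the entire history,
# carried through the recurrent state. This is the sequential dependency above.
import torch
import torch.nn as nn

vocab, hidden = 100, 32
embed = nn.Embedding(vocab, hidden)
rnn = nn.GRUCell(hidden, hidden)
head = nn.Linear(hidden, vocab)

def generate(steps=20):
    token = torch.zeros(1, dtype=torch.long)          # arbitrary start token (id 0)
    h = torch.zeros(1, hidden)                        # state summarising everything generated so far
    out = []
    for _ in range(steps):
        h = rnn(embed(token), h)
        probs = head(h).softmax(dim=-1)
        token = torch.multinomial(probs, 1).squeeze(1)  # next step conditioned on all previous ones
        out.append(token.item())
    return out
```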
 

Revan

Assumptionman
Joined
Dec 19, 2011
Messages
49,535
Location
London
That's true, they are very different tasks, especially image generation compared with NLP and speech generation. Generating images is mostly about modelling distributions, usually with hard boundaries between them, but for language and speech generation you have to model those distributions (usually far more of them than for images) with added temporal/sequential dependencies. I don't think we have yet figured out the right architecture and algorithm to deal with such sequential dependencies the way we have conv networks for images. Add to that the importance of pre-processing in NLP and speech, and it becomes even more difficult.
Yep. Thing is, in images there is temporal dependency too (for example, in videos), but the solutions there have been relatively straightforward. I would say that tracking is in a better place than anything in NLP/speech.

Temporal dependencies in vision seem to be much easier to model than in the other two fields, IMO. In fact, you often don't need to make them explicit at all: just treat each frame like an image (with maybe some trajectory smoothing) and it works relatively well.
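
(A minimal sketch of that 'per-frame plus trajectory smoothing' idea, purely illustrative: frame_model stands in for any ordinary image network, and an exponential moving average is just one simple choice of smoothing.)

```python
# Treat each frame like an independent image, then low-pass the per-frame
# outputs over time to reduce jitter (a crude form of trajectory smoothing).
def process_video(frames, frame_model, alpha=0.8):
    """frames: iterable of per-frame inputs; alpha: weight of the new frame (1.0 = no smoothing)."""
    smoothed, prev = [], None
    for frame in frames:
        pred = frame_model(frame)                     # per-frame prediction, no temporal model at all
        prev = pred if prev is None else alpha * pred + (1 - alpha) * prev
        smoothed.append(prev)
    return smoothed

# toy usage: a "model" that just returns one number per frame
print(process_video([0.0, 10.0, 0.0, 10.0], frame_model=lambda f: f))
```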
 

Amarsdd

Full Member
Joined
Jun 7, 2013
Messages
3,299
Yep. Thing is, in images there is temporal dependency too (for example, in videos), but the solutions there have been relatively straightforward. I would say that tracking is in a better place than anything in NLP/speech.

Temporal dependencies in vision seem to be much easier to model than in the other two fields, IMO. In fact, you often don't need to make them explicit at all: just treat each frame like an image (with maybe some trajectory smoothing) and it works relatively well.
I agree with you in terms of tracking in videos, but that is still a much easier task than video generation. I'm not really up to date on video generation, but just googling recent work on it, it seems it isn't even at the level of NLP or speech generation.
 

Revan

Assumptionman
Joined
Dec 19, 2011
Messages
49,535
Location
London
I agree with you in terms of tracking in videos, but that is still a much easier task than video generation. I'm not really up to date on video generation, but just googling recent work on it, it seems it isn't even at the level of NLP or speech generation.
It has come a long way in the last couple of years. For example:

Vid2Vid: https://arxiv.org/abs/1808.06601

Live Face De-Identification: https://research.fb.com/wp-content/uploads/2019/10/Live-Face-De-Identification-in-Video.pdf

I actually co-authored a paper this year (very similar in spirit to Live Face De-Identification) that got into CVPR (the top vision venue, and the venue with the highest impact in all of CS). It is more focused on images, but we were able to get temporal consistency for free (by simply smoothing trajectories across frames).

I agree that video generation still has a long way to go, and at the moment Photoshop is better at generating DeepFakes than AI approaches, but there has been tremendous progress recently.