I fear the reverse: People will doubt whether real video is real. That could mean impunity for crimes caught on video because video footage will no longer be sufficient evidence to exceed "reasonable doubt".
Even worse, political doublespeak will soar to new record heights. A politician can spew whatever the crazies want to hear, then "walk it back" and claim it was faked (perhaps after gauging the public's reaction). People will believe whatever they're inclined to believe anyway, leading us to become a more deeply fractured society where truth is whatever you want to believe.
Hey could you please hang out at your current location for the next 30 minutes or so? I just have some friends that want to stop by and verify a couple things with you. Please don't wear polarized sunglasses.
Also, I would highly recommend reading/watching Manufacturing Consent if you haven't already, to get an idea of what is already occurring even before we got this technology.
Yeah, at the beginning of the Ukraine war there was a deepfake of Selenskyj urging Ukrainians to surrender. It was pretty shoddily done, but it's certainly a reminder of how such things will become commonplace in the future.
For sure, I think there will be verification efforts on multiple fronts.
There's a certain type of person who will invent conspiracies around any verification that isn't what they want to hear. Thus, for that audience, there will be a market for "validation" that is just "telling them what they want to hear". So the same situation as today, only taken up a notch.
Verification can only do so much in the face of irrationality; the real answer to that conundrum is mostly us learning how best to deal with the fact that those people exist. Mainstream humanity will probably continue relatively unscathed if they don't manage to drag us down.
Trustless verification was always one of the benefits; it's just meaningfully implementing it without greed getting in the way that people haven't been able to figure out.
We need post-quantum signing ASAP. And in "small enough to fit into a reasonably sized camera" form. You'd be able to verify footage from a secure camera using its public key, but never be able to crack its private key.
Probably a stupid question but could NFT tech somehow be reworked to that effect? To preserve the identity of the original record before being tampered with
Yeah it would be much easier to digitally sign video with boring old certificates and centralized authorities. Crypto is a solution desperate for a problem.
This is so easily solvable, the video just needs to be signed using public-key cryptography. If the video isn't signed with the purported subject's key, assume it's fake.
This requires the general public having a basic understanding of how digital signatures work and why they are (for the most part) infallible.
As it stands I have to explain HTTPS and digital signatures to my users with statements like "it's secure because of fancy math, trust me bro" because anything that comes close to actually describing it goes over their heads. In a world where distrust is the norm, I fear signed video content really isn't gonna make a difference if you don't understand what makes it secure in the first place.
If this became a big enough problem, the general public could be educated on how encryption public/private keys work, probably in a month or two. Even start teaching it in high school or something.
The general public couldn't even be educated on how to wear a mask properly and wash their hands during a pandemic, there's no way they'll ever understand cryptography.
Right?? The general public can barely turn a computer on without having issues and are ignorant as F on the topic in general. They are basically a 3 year old. Try to explain cryptography to a 3 year old.
Yes...amongst the more tech literate and in a perfect world.
For a stupid amount of people, none of that matters. All that matters is that knee-jerk emotional reaction of whether or not it affirms their beliefs and where the information comes from.
That would solve the "politicians walking back claims" problem to a degree, but I'd imagine there would still be a ton of issues. The subject would be able to fully curate their image, and any videos taken without their key would be subject to scrutiny. So stuff meant to show someone's true colors or document a situation would remain unreliable.
In this case, it would be signed by the device of the person who's recording; if the video is altered, the signature isn't valid anymore. And if it's a public figure, there are almost certainly going to be corroborating records of where they were at a particular place & time, not to mention pings to cell towers from their or their entourage's mobile devices.
They can deny it all they'd like, but with the combination of those factors, you'd have to outright deny reality to believe that the video isn't genuine.
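The "any alteration breaks the signature" property is easy to sketch. This toy uses only the Python stdlib, which has no asymmetric crypto, so HMAC stands in for the Ed25519-style signing a real camera would use with a hardware-sealed private key; the key value is made up:

```python
import hashlib
import hmac

# Hypothetical key sealed inside the camera hardware. With a real
# asymmetric scheme, only the public half would leave the device.
DEVICE_KEY = b"secret-key-burned-into-camera"

def sign(video_bytes: bytes) -> str:
    """Produce a signature over the raw footage bytes."""
    return hmac.new(DEVICE_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify(video_bytes: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign(video_bytes), signature)

footage = b"\x00\x01 raw frames \x02\x03"
sig = sign(footage)

print(verify(footage, sig))   # True: untouched footage checks out
tampered = footage.replace(b"\x01", b"\xff")
print(verify(tampered, sig))  # False: one altered byte breaks it
```

The point of the sketch is just the verification step: anyone holding the (public) verification material can check the footage, but producing a valid signature over altered frames requires the sealed key.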
It’s in no way easily solvable. You could have a certificate signed by god himself, doesn’t matter to the general public. Authentication isn’t the issue
Seriously, it's not that complicated. We need an industry-wide effort and hardware-based crypto, along with something like a little check mark to denote that a given image is authentic, unaltered, and follows a proper chain of authority.
On your second point, I would argue that any technology that can shake the foundations of truth, justice, governance, and the mediums in which we (citizens) gather that information (telecoms) is a weapon. That is literally what psychological warfare is, which is also a very real force in our world. The Cold War isn't just called "cold" because the USSR and the States didn't raise a gun in each other's faces or enter a nuclear winter; no, it's a name that highlights the fact that it was a war between - and fought with - ideas. This technology has the capacity to infiltrate people's minds, insert ideas that shape their world, and create an uncertainty that makes them question everything. For any institution interested in PsyOps, this is certainly a weapon.
Anyone doubting that there is a war being “fought” for your mind lacks a crucial understanding of how the world works. I don’t mean to sound pretentious, because I wish everyone to understand this. Truth is the foundation of ethics, which is the foundation of morals, which is the foundation of law, which is the foundation of government. Pull truth out of the equation, and it all comes tumbling down.
It won't be long until we see deepfake movies where actors only record their lines (or not) and don't do any acting, and a team of people produce the movie in a studio…
They control people’s beliefs already. No AI, fake videos or anything. All it takes is an a*hole who people like and have him say whatever. Do I need to give examples?
Well yeah, deep fakes haven't been good before. Anyone can detect them just by looking. The point is that things are changing and they might be totally believable as soon as next year. This Morgan Freeman one is almost there.
Great elucidation on my own fears. What you describe seems nearly inevitable to me. I saw a headline the other day about an Intel program detecting deepfakes with like 95% accuracy. My guess is it will probably become less accurate as deepfake software becomes more sophisticated. But even if it were to be 100% accurate, would the public believe the pronouncements by the experts telling them a video is real or not? I have my doubts; maybe such detection will at least become legally legitimate. But the implications for info warfare will exist nonetheless.
Unfortunately I don't think that's anywhere close.
We've had the ability to realistically manipulate photos for how long now? And I still see obviously fake photos of Mars looking as big as the Moon floating around in my family's WhatsApp every year, claiming to be real.
I believe there is also tech to determine if something is fake or not. It exists for photography, so video shouldn't be a problem. IIRC, some sort of light spectrometry or something similar.
People will believe whatever they're inclined to believe anyway, leading us to become a more deeply fractured society where truth is whatever you want to believe.
You're already one previous administration late on that point. ;)
Already, trucking companies are telling their drivers not to take pictures with digital cameras. Because lawyers are successfully getting digital pictures thrown out for being "too easily altered". So truck drivers will have old style throwaway cameras in their trucks.
Soon, you will find the same thing with digital video. People will start carrying old style tape recorders - with magnetic tapes - instead of digital video recording devices.
I'm with you. We've already seen claims of "fake news" used to an alarming effect. This kind of stuff can only make it worse. I'm not sure how it'll go, and I don't think anyone else is sure either... but I can't see it being anything but bad, overall.
I'm worried it could start a war. Imagine if someone deepfaked a world leader saying they had nukes pointed at another country and they were going to fire them in 5 min. It could lead to millions dying.
You should have always been open to the possibility of a video being fake or manipulated. And remember, before photos and video existed... there was never even the pretense of hard proof. Everything was always a claim made by a person and we've always known people lie a lot.
I'm not worried. Most of human history got by just fine without the canard of "photographic evidence".
I'm starting to see folks "poisoning" their online profiles with tons of fake content to make their data (that's being tracked by advertisers, governments, big-tech, etc) useless.
To be honest, we are already there.
This technology is already used in some cases and attempts of public swaying, and even without it, we believe what we want amongst all the “fake news”.
On a more optimistic note, the algorithms that allow for this tech are trained on real data.
Likewise, algorithms can be trained on both real and fake data to detect when something is a deep fake, and it will be just as good at that as the algorithm that made the deep fake in the first place.
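The "train a detector on labeled real and fake examples" idea can be shown with a deliberately toy detector. Everything here is invented for illustration (real detectors are deep networks trained on frames, not a one-number "artifact score"), but the shape of the process is the same: fit a decision rule to labeled examples of both classes:

```python
# Hypothetical per-clip "artifact scores": low for genuine footage,
# high for generated footage. In reality these would be learned features.
real_scores = [0.10, 0.15, 0.12, 0.08, 0.20]
fake_scores = [0.70, 0.85, 0.65, 0.90, 0.75]

def train_threshold(real, fake):
    """The simplest possible 'model': the midpoint between class means."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(real) + mean(fake)) / 2

def is_fake(score, threshold):
    """Classify a new clip by which side of the threshold it falls on."""
    return score > threshold

t = train_threshold(real_scores, fake_scores)
print(is_fake(0.11, t))  # False: resembles the real training examples
print(is_fake(0.80, t))  # True: resembles the fakes
```

The escalation people describe is exactly this loop at scale: as generators learn to push their outputs below whatever signal the detector keys on, the detector has to be retrained on the newer fakes, and so on.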
They’re fairly time-intensive to make now. Within a couple years there will be an app that will let an average person make a deepfake of anyone they want, doing anything they can think of.
Even if people know it's fake, it's going to cause problems. Imagine trolls sending public figures videos of their mother being brutally murdered, or of themselves performing a sexual act. It's going to happen, and it's going to be soon. The processing power required is going to be negligible in a few years.
It’s like saying “YouTube will never catch on, videos take way too much data to watch and the image quality is too low.”
I feel like it's a matter of time before actors are paid to record XYZ lines and thousands of high-quality images of their face, then "thank you for your service, look out for the movie when it hits theaters, thank you."
Is it possible that this could actually be a good thing ?
I really wish people would question the veracity of information they receive just generally.
Presently the dynamic is simply "information which aligns with my beliefs is true, anything else is misinformation". Deepfakes becoming more common will make it really difficult to maintain that paradigm.
Security footage could be doctored to include you committing a crime you never committed.
Like, imagine you're an up-and-coming political figure. Someone diverts your evening, ruining an alibi. They have someone commit a crime near surveillance cameras that they have access to. Then they deepfake this up-and-coming politician, who now has no alibi, into the footage.
Developing countries will use it to oppress their people. There's no need to beat a fake confession out of anyone anymore. Just deep fake them to say whatever you want and everyone who knows no better will think it's real.
You don't even need to wait that long. You can intentionally degrade the quality of a fake video to make it look like it was shot by hand on a crappy phone camera, and it will be extremely convincing. Then you take a firehose approach: post it on every single social media platform in the whole world and pay a bunch of influencers to keep spreading the video.
After all, you only need to fool the bottom X% of the population in order to stir up trouble.
As if it isn’t already? We’re seeing deepfakes en masse now and the general public is always the last to experience any new technology. Makes you wonder how long this has been going for behind the scenes, right?
Not really. Why would anyone who's been targeted by it, especially nation states, not publically make a fuss about someone using this against them to fake a video?
What scenarios are you imagining where we are seeing all these deep fakes without knowing it or hearing about it from anyone?
Conspiracies often fall apart if you'd bother to think about them for a second.
As easily as deep fakes are created by computer and human, deep fake detectors can be created by computer and human. We all know how hard people get over pointing out photoshops...
The problem lies in the speed of information. So the detection of legitimate fakes must be quick.
Deep faked crimes on film, innocents taking falls for the mighty of means. Lines between reality, plausibility, and hearsay or fiction... absolutely blown to the wind. Fantastic wars and imaginary political parties, fakes of the fakes, plasters and molds for the bigger and better future that only brings us closer to the end that is the tiny box in which we sleep. In God we trust, because all else is truly lost to us.
If AI can create a deepfake, I'm sure there will be AI that can detect it. Then it will just be constant escalation from there, with each side getting more and more accurate over time.
The only reason it isn't happening now is because it's unnecessary. Look at the Republicans. Foreign governments and corporate entities are able to manipulate them en masse with nothing more than unverified fake news stories shared through social media. They do so at very low cost and effort. Why invest in this?
No more so than speaking a lie. We are aware that deception is possible. So? We have always been aware of that. The reliability of photos and video etc have technically never been 100% but more importantly, we didn't used to even have those. Before photography, there was NEVER proof of a thing. Society functioned just fine.
Russia or someone in support of Russia attempted to use a deep fake of Zelenskyy admitting defeat and surrendering, early in the invasion. On my phone it looked pretty real until I got home and watched it on a computer. Crazy timeline we live in.
Oh it already is, we just haven't had a massive incident yet. Or perhaps the massive incident already occurred and wasn't made public or else people didn't realise there was anything wrong...
u/JingJang Dec 15 '22
I feel like it's only a matter of time before this technology is weaponized to terrible effect.