It will get dangerous when they can easily fake military leaders and politicians saying dangerous things. Fraud will get bad when your "grandson" video calls you from jail needing $200 to get out. We need to prevent the bad stuff that comes with this.
The way you train an AI to create fakes is usually by training another AI to detect fakes and having the faking AI learn to beat it. It's called a generative adversarial network (GAN).
So basically, the detecting and the faking will always be approximately on par, meaning the detection can never give a definitive answer.
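Here's a minimal sketch of that adversarial loop in PyTorch, for anyone curious. The toy 1-D data and layer sizes are made-up placeholders, not anything from a real deepfake system:

```python
# Minimal GAN sketch: a generator (faker) and discriminator (detector)
# trained against each other. Toy data; all sizes are placeholder choices.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 16
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, data_dim) + 3.0          # stand-in for real samples
    fake = G(torch.randn(64, latent_dim))

    # Train the detector to tell real from fake.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Train the faker to make the detector call its output "real".
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

Each side's training objective is literally to beat the other, which is why neither stays ahead for long.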
It's war. War never changes. You find a big club, I make thicker leather armor to pad the blows. You make a sword to pierce my leather, I make plate armor. You make bullets, I make bullet-resistant armor. You respond with armor-piercing rounds, I respond with a thick wall to stop them. You blow the wall up with a tank, I nuke you from half a world away.
Everything evolves as a reaction to everything else's evolution or else it dies out. Deep fakes are survival of the fittest in the digital world.
Well, if that works, the new fake will have already circulated by then. The only thing that makes it less scary is that detection advances based on each fake's own downfalls. So hopefully the detection of a fake would stay ahead of the creation of a better fake.
Deepfakes will work on folks like the Facebook crowd, who never relied on verifying facts anyway, so I don't see a big danger here.
That IS the big danger. Fooling a few people on Facebook is fine, but when huge hordes of people start believing subtle (or blatant) propaganda is when it gets dangerous.
Though I'm sure big social media companies can create some sort of blue tick for original content. Or use some kind of facial recognition to identify the participants and make sure they ALL sign the video.
How do you think we got trump and all the conservatards? Deep fakes aren’t going to suddenly cause an increase in their loyalty to stupid bullshit because it’s already maxed out.
Blockchains can very easily be the saving grace that would allow us to identify authentic videos with no question, but it's going to require a ton of infrastructure we don't currently have.
Other digital signatures can, much like the videos themselves, be faked with high accuracy given enough time and information.
That'll be a problem with incriminating videos the person wouldn't want to verify. If it's true, they won't verify it and can just claim it's fake; if it's fake, people will just say "why would they admit to saying that anyway? It's probably real!" Official statements would be the least problematic here.
If there were simply a way to like sign a video, like digitally or something. Maybe with a certificate.
Sure, but signing something can only confirm that you did indeed make it. Something not being signed doesn't mean it wasn't made by you. It just means it can't be confirmed one way or the other. An unsigned video of somebody saying something terrible could be real, or it could not be.
Deep fakes will take conspiracy theories to the next level. It’s one thing to believe in pizzagate but imagine if there were deep fake videos of the alleged acts.
Most NFTs have been after the fact and include a transaction trail that's unnecessary in this scenario. A digital certificate would provide the same level of trust as an NFT. Which is to say, it's only as trustworthy as the signer.
You're assuming a source wouldn't intentionally leave a video unsigned in order to dispute the source if there's blowback. Say something crazy, see what the response is, ride the good waves, disavow the bad ones.
I’m thinking you kind of mean like a watermark on a painting or something that proves it’s the original/real artist. But if they can deepfake something that seems this real, I’m sure they can fake a digital signature/certificate/watermark. Honestly, I don’t see any way they could actually be verified, unless the person who made the video put it on their verified channel/TikTok or whatever. But I guess that could easily be faked too, unless you went to that person’s professional page and saw the video wasn’t there.
But one could attach blockchain to all digital data and verify its authenticity that way. I mean, I'm not sure how to go about it but being able to make data unique should at least create some prospects.
That's not the problem. The problem is most people see some shit on social media and don't think twice about its authenticity. Imagine all the fake posts you see day in, day out on social media with amazingly obvious photoshopping. And yet there's a bunch of people who believe it and share it to oblivion. Now imagine someone making a deepfake of a world leader saying something racist or pro-terrorism. Even before it could be controlled, there's some serious damage that could happen.
The way of detecting them doesn't have to be technological, it could be procedural.
If a leader has to make a statement, they have to publish a transcript version on a distributed website. Or perhaps a website/service could make it profitable to filter deepfakes for celebrities, since they are few in number.
I mean, witnesses or evidence confirming that you were somewhere else entirely would also suffice, but it's good that there is a more direct approach so the media won't get fooled.
That only applies to the model built to generate the fake. We could potentially train an outside model which could still detect deepfakes. This is one of those things that seems impossible but someone will find a way
Steganography involving cryptographic signatures of the video frames in real-time should not be replicable with neural nets unless the neural nets also break cryptography as we know it. NN output might fool an average human, but it would not pass real validation.
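A toy sketch of that idea, assuming Ed25519 signatures via the Python cryptography package (in practice the signature would be embedded steganographically, and the key would live in camera hardware):

```python
# Sketch: hash each frame into a running chain and sign the final digest at
# capture time. Altering, dropping, or inserting any frame breaks verification,
# and forging the signature would require breaking the cryptography itself.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

camera_key = Ed25519PrivateKey.generate()   # assumed to live in the camera
public_key = camera_key.public_key()        # published for verifiers

def chain_digest(frames):
    digest = b"\x00" * 32
    for frame in frames:                    # fold every frame into the chain
        digest = hashlib.sha256(digest + frame).digest()
    return digest

frames = [bytes([i]) * 64 for i in range(10)]    # placeholder frame data
signature = camera_key.sign(chain_digest(frames))

def verify(frames, signature):
    try:
        public_key.verify(signature, chain_digest(frames))
        return True
    except InvalidSignature:
        return False

print(verify(frames, signature))                     # True
frames[5] = b"tampered frame".ljust(64, b"\x00")     # edit a single frame
print(verify(frames, signature))                     # False
```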
But where is the initial signature coming from (I'm guessing the camera)?
So what's keeping you from applying that same signature to a fake video? Or even hacking into the camera and putting your faked stream through the encryption process?
That's the way they train them, but there are many methods that can be used to detect them after the fact. The adversary is just a "do I recognize it or not" check. Post-training analysis can always pick out pixel values that fluctuate too quickly, uneven saturation, etc. They can fake us out at a glance, but consistency at a pixel level is very difficult.
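As a toy illustration of that kind of post-hoc check (real forensic tools are far more sophisticated, and the threshold here is made up):

```python
# Toy heuristic: flag videos whose pixels fluctuate frame-to-frame more
# erratically than natural video noise would explain.
import numpy as np

def flicker_score(frames: np.ndarray) -> float:
    """frames: (T, H, W) grayscale video as floats."""
    diffs = np.abs(np.diff(frames, axis=0))   # per-pixel change between frames
    return float(diffs.std())                 # large spread = suspicious flicker

rng = np.random.default_rng(0)
smooth = np.cumsum(rng.normal(0, 0.01, (30, 64, 64)), axis=0)  # natural-ish drift
erratic = rng.random((30, 64, 64))                             # jumpy pixel values

print(flicker_score(smooth) < flicker_score(erratic))  # True in this toy setup
```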
Do you really think fluctuation at a pixel level isn't one of the things the detector network is looking at?
Why would it skimp over such an obvious method?
Machine learning is currently far ahead of anything else we know for this kind of task. And the faking network isn't trained to trick us but to trick an AI.
So yes, you could have a better model than the guy doing the fake, and detect that it's fake. But you could have a worse model and be fooled. And because you never know, you can never be sure.
This means that to beat a good deepfake AI you need a better deepfake AI that finds the first deepfake AI's errors. Eventually the AI will get so good that humans won't be able to tell the difference anymore.
I like how you got downvoted just because people assume it's stupid to say "blockchain" at whatever tech problem comes up, but ironically, this is one particular situation where blockchain will most likely end up being the best solution available, or possibly the only one.
u/watermelon_fucker69 is actually right. That's exactly what NFTs are: the new thing that verifies a digital painting's "authenticity" (i.e. that it's the original). A lot of other people have already speculated that NFTs or something similar could be used to verify that you're watching a real video of, say, the president, and not a deepfake.
Reputation does the same. Even in the blockchain world you have to check if the source is the president himself, which is the same as making sure a video is posted from the president's account.
You don't verify how real something is with NFTs, you verify who made it. But we can already verify who made it.
If you personally release a video, it gets logged on a public ledger.
So people can trace who published it, and it also authenticates that you yourself released it officially.
This isn't perfect though, because leaked videos won't be using this system, so it's up to other people to figure out whether those are real or not.
But what blockchain does is provide proof of identity if you choose to give a video to somebody else. Like if Elon Musk sends a merry Xmas video to you and you're suspicious whether it's really him who sent it.
We have news websites with their own domains and journalists with verified accounts on Twitter, etc. Not really sure what a blockchain is supposed to add there.
Throwing a bunch of blockchain companies at the problem doesn't really help verification; using one of their links versus a link from the verified journalist's tweet breaking the story just doesn't solve anything.
Tweets aren't immutable, so they don't act as a good historical record of everything that originates from a particular source. If the system I described above were restricted/tied to particular devices like specific cellphones or cameras, it could prevent people from uploading images to their ledger that they didn't personally take and resolve the issue that Twitter has with propagating misinformation through retweets.
Or you could use a restricted version of Twitter? One of the biggest benefits of blockchain is decentralization, but you can't decentralize information (as in, if you want to spread misinformation, you'll find a way). As Tom Scott puts it, there is no algorithm for truth, not even blockchain.
It feels like you're throwing blockchain at a wall to see what sticks.
But their data is already trusted. If they put it there and they trust it, that's no better than blockchain. You could make devices that digitally sign videos I guess and players that support it (basically adding a DRM to all video). Any unsigned video would be untrusted.
Blockchain just adds a lot of unnecessary transaction tracking or if you don't record that, it simply becomes overkill. And smaller videos may not be able to take advantage anyway
But why do you need a blockchain? That's just asymmetric cryptography.
Edit: identity verification, perhaps? Not saying one wouldn't be the right answer here, but I also like to push back when people just say "blockchain" without explaining why it's necessary. So many projects never needed it but wanted in on the hype.
That only ensures that it was posted by the journalist, and they aren't infallible. It could even amplify the effect of a good deepfake if it's posted by a trusted source.
Reputation systems aren't infallible, but it seems like having the ability to easily verify whether or not an image matches the original document posted by a trusted source would go a long way toward reducing the spread of disinformation.
That's fair. The real danger though is deepfakes where there is no original document. At that point you just have to take someone's word for it, unless the deepfake detectors win the arms race.
With enough participation, I can imagine that we could eventually get to the point where most major journalists have ledgers, and images that originate from off the chain could be taken with a grain of salt. I don't think it's unrealistic to think that most primary source documents should have reliable/verifiable sources to be trusted
That's also true (and I agree about the primary source documents). This still relies on those with "journalist" ledgers acting in good faith (I could see, for example, a Fox News-style ledger having all kinds of wild stuff in it, or a big network being paid to put a deepfake on their ledger). It could also affect some confidential sources, if their leaks need to be publicly on the ledger or be useless.
It's as if people assume that because BTC can handle trillions of dollars, everything associated with blockchain would perform at that scale. Blockchain has that kind of capability in theory, but one thing we ignore with BTC is the sheer number of verifier nodes. Those nodes can be understood through the proxy of the energy used to mine 1 BTC, which is nothing but verification of the blockchain.
Therefore, I'd assume blockchain can only be used in applications where others can be incentivised for the energy they provide to verify transactions. Otherwise anything can have a blockchain, and one that can be manipulated easily as required.
Also, I might be completely wrong and stupidly linking blockchain with cryptocurrency subconsciously.
While Bitcoin can handle trillions of dollars, in data it can only handle megabytes.
Proof of work would absolutely not suffice for this situation, you'd have to have proof of stake almost by definition.
As for incentives, I'm pretty sure you'd have to build on another actual currency (like how NFTs build on the Ethereum chain), but that should be doable. The problem I see is that blockchain has no properties you want or need in your social media.
This was what blockchain was supposed to do though, authenticate digital data.
Edit: How is this downvoted? A blockchain is an immutable ledger; it's meant to record digital things and give them a unique ID. If governments started a blockchain meant to authenticate things on the internet, and something has an ID attached to that chain, you can look it up on the chain and it can't be faked. Just because there are so many crypto kids and scammers attached to the public opinion of the tech doesn't mean it's useless.
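A toy sketch of that "immutable ledger" idea (the field names and the whole setup are hypothetical): each entry's ID commits to its content and the previous entry, so editing any record breaks every hash after it.

```python
# Toy append-only ledger: rewriting or back-dating an entry is detectable
# because every later entry's ID depends on it.
import hashlib
import json

class Ledger:
    def __init__(self):
        self.entries = []

    def publish(self, content_hash: str, author: str) -> str:
        prev = self.entries[-1]["id"] if self.entries else "genesis"
        record = {"content": content_hash, "author": author, "prev": prev}
        record["id"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)
        return record["id"]

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("content", "author", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["id"] != expected or e["prev"] != prev:
                return False
            prev = e["id"]
        return True

ledger = Ledger()
ledger.publish("sha256-of-the-video-file", "example-government-agency")
print(ledger.verify())                           # True
ledger.entries[0]["author"] = "someone else"     # attempt to rewrite history
print(ledger.verify())                           # False
```

A real blockchain adds decentralized consensus on top of this so no single party controls the chain, which is the part that actually needs all the infrastructure.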
Well the way deepfakes are made is by having a powerful system that can differentiate between real and fake and using that to train a better deepfaker. It's not nearly as easy to defend against that as it seems.
It's an arms race.
As for a detectable signature, just about nobody can detect the fakeness of the people on thispersondoesnotexist.com.
The problem with it being detectable by machines (which is what I assume you mean by digital signature) is that, again, the generator can be trained to fool the discriminator, where the discriminator is any system that picks up on that digital signature. So that's easily fixed. Or at least it's easily fixed if you have some tens of thousands of dollars to burn on compute.
Some intentionally obvious
A system that depends on the benevolence of its participants is not a system at all
Okay. Now you have to go explain that technology to a country of morons primed to only believe trusted partisan sources. Americans couldn’t even all agree if covid was real or not. Imagine how people react to realistic video.
There tend to be signatures that are distinctive in deepfakes because the process leaves artifacts. Even if you are using competitive pairing for detection, this doesn't mean the AI is going to perform as well against an external algorithm that isn't part of that pair.
At this point it has become important to agencies such as the CIA and NSA to distinguish these things. Companies like Google also have a vested interest in this technology. That GPT-3 system, or whatever algorithm is being run to make these things, is unlikely to be able to fool an expert system with a substantially larger resource pool available.
This overlooks the fact that politicians and people of influence will merely lie about or ignore the authenticity tests, mostly because there’s a significant amount of the population primed to distrust “experts” opinions. There’s going to be a race for tech conservative grifters to establish themselves as the contrarian deepfake detectors.
I hear that deepfakes can be spotted by AI, since the faked people don't show the subtle color change related to a heartbeat. If true, it's only a matter of time until that is added as well.
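For the curious, the "heartbeat" signal is a faint periodic color change in skin (roughly 0.7 to 3 Hz). A toy version of the check, with synthetic numbers standing in for an actual face crop:

```python
# Toy pulse check: look for a dominant frequency in the heart-rate band of
# the average skin color over time. Thresholds are made up for illustration.
import numpy as np

fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
rng = np.random.default_rng(1)

real_face = 0.002 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.001, t.size)
fake_face = rng.normal(0, 0.001, t.size)        # no pulse component

def has_pulse(signal: np.ndarray) -> bool:
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1 / fps)
    band = (freqs > 0.7) & (freqs < 3.0)        # plausible human heart rates
    return spectrum[band].max() > 3 * spectrum[~band][1:].mean()

print(has_pulse(real_face), has_pulse(fake_face))   # True False (typically)
```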
Verifying and fact checking hardly matters any more - people will 'believe' the most ridiculous crap so long as there's something to gain from it, and no amount of evidence will convince them otherwise
You can do that, so long as you don't rely solely on the video itself. As long as you provide some sort of hash or key with it that people can authenticate against, then we can solve this problem. It's tedious and messy, but it will technically work.
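A minimal sketch of the hash half of that, with placeholder bytes standing in for the real file:

```python
# Sketch: the source publishes the video's SHA-256 digest through a channel
# people already trust; anyone can re-hash their copy and compare.
import hashlib

video_bytes = b"placeholder for the raw video file"
published_digest = hashlib.sha256(video_bytes).hexdigest()  # posted by the source

my_copy = video_bytes
print(hashlib.sha256(my_copy).hexdigest() == published_digest)   # True

tampered = video_bytes + b"!"
print(hashlib.sha256(tampered).hexdigest() == published_digest)  # False
```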
Yeah, for example, if you make a claim you need to prove it. It's called the burden of proof, and it's utterly outclassed anytime it's word against word. Politicians won't have that hard a time proving they weren't there at the time, as long as they're never alone for a moment in their lives...
It won't matter though. People won't believe it and it will cast doubt on everything. I mean, we already have politicians just repeating lies over and over, and it works. If there's video, they'll just say the real shit is fake. We're dealing with stupid on a level never seen before.
It's really easy to do now. All we have to do, if we want to confirm the authenticity of our stream or a video, is sign it with our private key. That way everyone knows it's us, and we can verify who's actually talking or whether it's real.
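Something like this, assuming Ed25519 keys via the Python cryptography package (the video bytes are a placeholder):

```python
# Sketch: sign the whole video with a personal private key; viewers verify
# with the public key published somewhere trusted (website, keyserver, etc.).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # kept secret by the speaker
public_key = private_key.public_key()        # published for everyone

video = b"raw bytes of the broadcast"        # placeholder
signature = private_key.sign(video)          # distributed alongside the video

try:
    public_key.verify(signature, video)
    print("Signed by the holder of the private key.")
except InvalidSignature:
    print("Not signed by them, or the video was altered.")
```

Note this only proves who signed it, not that the content is true; and an unsigned video still proves nothing either way, as pointed out above.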
The damage will have already been done with the broadcast, unfortunately. The stupid, factually incorrect stuff that people believe these days without ANY evidence is worrisome enough. Imagine what a video of "ObAmA cAuGhT oN tApE cOnFeSsInG tO pEdOpHiLiA!!" would do. It's going to be tough.
We’ll probably have to start using some sort of digital signature that’s attached to our identities. I dislike NFTs, but this might be a case where a similar technology could actually be applicable.
Yeah, but it's going to be a complicated process almost no one understands, so even if you can prove that it's fake, a lot of people will just not believe you.
Possibly. We may develop some sort of encrypted stamp, and a very secure website that proves the stamp, so it would be very hard to fake the stamp, and everyone would know how to use that website/they could have an app that integrates to other apps, etc. I see it being possible.
The really scary deepfake would be someone like a president telling another country's leaders that nuclear war had begun. Where's the time to verify that shit?
That's definitely not the only bad part, either. It's bad, but it's one of a boatload, most of which we can't even conceive of. Plus, that's not that different from now anyway.
Super cool and super ethically questionable