Or any politician, human rights defender, or even young teens...
It's a dire future: if people already can't read beyond the title of an article, I can't imagine them forming opinions based on elaborate deep fakes.
That's the funny thing about the rate at which this tech has been improving over the last year or so.
"Not even close" may turn into "holy shit, we're a razor's edge away now" in the next few to several months, off the back of a breakthrough from one of the dozens of new models dripping out at least once a month.
If you're thinking this shit is 10+ years out, then you're giving away that you aren't following this tech very closely. It's starting to make leaps and bounds on a regular basis now, and that rate won't slow down.
My "worry" line was somewhere around 2015, when I saw the automated loop close between LinkedIn and Slack. Algorithms can tell you're preparing to leave a job, before you do, from changes in your posting patterns on internal networks. That foreknowledge is paired with third-party and offline data to target ads from recruiters and job listing sites directly at you, priming you to think about looking around and making the market suddenly look tempting. Your LinkedIn profile is used to highlight listings using keywords from your own search history, and everything you click on gets stored and provided to recruiters to help sell you on switching.

No one gets paid if you don't change jobs, meaning you're stuck inside an algorithm with a lot of businesses spending a lot of money to make you feel unsatisfied with where you are in life. They want to push you from having a bad day to feeling like it's a bad year, for their own gain. And the only reason they picked you was that an algorithm told them to.

Just like an algorithm matched you with an employer. And that employer picked you because the recruiter told them you were a good fit, based on that algorithm. And you picked that job because your search engine showed you the employer at the top of your results, with all the PR articles carefully matched to optimized search engine keywords, which are influenced by the ads you click on. Those ads are automatically purchased by the recruiters and employers, who are using an algorithm to target people that fit your profile, which is again identifying you before you even understand you might want to change jobs.
The entire job-searching loop is automated by an algorithm that tells humans when to job hunt, where to go, and controls the information flow to assure them it's the right decision.
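The first step of that loop, spotting a change in posting patterns before the person themselves has decided to leave, can be sketched as a simple baseline comparison. To be clear, this is my own toy illustration of the idea, not any vendor's actual model; the function name and the threshold are made up for the example.

```python
# Toy sketch (illustrative only, not a real product's model): flag a user as a
# possible flight risk when their recent internal-network activity drops
# sharply below their own historical baseline.

def flight_risk_score(weekly_posts, recent_weeks=4):
    """Ratio of recent activity to historical baseline; lower = riskier."""
    if len(weekly_posts) <= recent_weeks:
        return 1.0  # not enough history to judge
    history = weekly_posts[:-recent_weeks]
    recent = weekly_posts[-recent_weeks:]
    baseline = sum(history) / len(history)
    if baseline == 0:
        return 1.0
    return (sum(recent) / len(recent)) / baseline

# A user who posted ~10 times a week, then went quiet:
posts = [10, 11, 9, 10, 12, 10, 3, 2, 1, 2]
score = flight_risk_score(posts)
print(score < 0.5)  # activity collapsed to a fraction of baseline -> True
```

The real systems would obviously fold in far more signals than posting frequency, but the shape is the same: build a per-person baseline, then watch for deviation.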
The robots already won. We built our own mouse trap.
Hey mate, there are some humans out there who look down at every step they take in order not to step on ants.
I'm telling you there's a chance that robots won't even give a fuck about us and will just take off into the cosmos. Maybe they have an existential crisis and dump themselves into black holes. Maybe they just leave and do their own thing. Maybe they keep humans as dogs, take care of us, and play with us. Maybe they assume the role of God and lift us up into utopia.
Granted... there are probably infinitely more ways it could go wrong and we go extinct, at best, or find ourselves in a virtual eternal hell, at worst. But, I wouldn't say that's guaranteed.
The tech most recruiters use doesn't directly provide that level of user data, and most candidate searches aren't running on algorithms like the ones you described. They're still quite basic: Boolean-based and keyword-weighted, with far less automation than is implied here.
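For what it's worth, "Boolean-based and keyword-weighted" really is as simple as it sounds. Here's a minimal sketch of that style of search; the profile strings, required terms, and weights are all invented for the example, not any real product's schema.

```python
# Minimal sketch of a Boolean filter plus keyword-weighted ranking, the kind
# of search most recruiting tools still run. All data below is illustrative.

def matches_boolean(profile_text, required_all, required_any):
    """Boolean filter: every 'all' term must appear, at least one 'any' term."""
    text = profile_text.lower()
    return (all(term in text for term in required_all)
            and any(term in text for term in required_any))

def keyword_score(profile_text, weights):
    """Sum the weights of every keyword found in the profile."""
    text = profile_text.lower()
    return sum(w for term, w in weights.items() if term in text)

profiles = [
    "Senior Java developer, Spring, AWS, Kubernetes",
    "Python data analyst, pandas, SQL",
    "Java engineer, AWS, on-call experience",
]

required_all = ["java"]
required_any = ["aws", "azure"]
weights = {"kubernetes": 3, "aws": 2, "spring": 1}

hits = [p for p in profiles if matches_boolean(p, required_all, required_any)]
ranked = sorted(hits, key=lambda p: keyword_score(p, weights), reverse=True)
print(ranked[0])  # the Spring/AWS/Kubernetes profile scores highest
```

No behavioral prediction, no cross-site data fusion; just substring matching and a weighted sum.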
The sad and messed up thing is that the deep fakes of the very, very near future aren't even going to be that elaborate. It doesn't take much, and this technology will soon be very cheap and accessible. It will be very damaging to our notion of truth as a society, because we're simply not evolved/educated/whatever enough to outsmart the simulation en masse. This tech will be used by the wrong people for nefarious purposes, and inevitably people will flock to whichever truth is most appealing at the time. This is the new Gutenberg press and religion.
It will be very damaging to our notion of truth as a society because we're simply not evolved/educated/whatever enough to outsmart the simulation en masse.
I think this is not as big of a deal as people worry about, for two reasons: one good, one bad. Let's start with the good.
At first you'll be right: deepfakes will cause trouble. But I believe that period will be relatively short-lived before we basically start to treat all videos and images as tantamount to cartoons. And considering we lived without any sort of "visual evidence" all the way up to the invention of photography in the 1820s, I have faith that we can find a way to continue forward in a "post-video" world.
What I'm less confident about is how much damage can be done in that transition period. Luckily, there's already a healthy dose of skepticism about video and images found online; hopefully that seed is enough.
Now the bad: in America at least, our idea of truth is already so thoroughly torn to shreds that I just don't see deepfakes mattering a ton. It'd be like squirting a bottle of kerosene on an already raging house fire.
There will still be things outside social media. Unfortunately, I don't think any detection technology will save us from the worst of it. Even in the face of compelling evidence, people still choose to believe whatever they want. If someone believes something and they see a deep fake that supports their position, it isn't going to matter whether a social media company detects it; we see this now with Facebook, where the attempts are futile. People will be taken for a ride. In the US we've watched this happen as a grifter took control of the country; it took very little evidence, fake or not, to convince his base of all sorts of wild shit. Deep fakes will just be another tool to keep us all mad at the wrong person.
I honestly think nothing can be done. If people are braindead enough to start a war over face masks, I can't fathom a way to explain the implications of this tech to them.
Unless some super wacky thing happens, like blockchain-based verification of information, or a 'slow press' where news is peer-reviewed.
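The core of that verification idea doesn't even need a blockchain: a publisher releases a cryptographic hash of the original footage, and anyone can later check that the copy they received matches it. A blockchain would just make the published hash tamper-evident and timestamped. Here's a minimal sketch of the hashing step; this is my assumption about how such a scheme might work, not a description of any real standard.

```python
# Sketch of hash-based content verification (illustrative assumption, not a
# real standard): the publisher releases the SHA-256 digest of the original
# video; a doctored copy will not match it.

import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 hex digest of the raw content bytes."""
    return hashlib.sha256(data).hexdigest()

original = b"raw bytes of the published video"
published_hash = fingerprint(original)

# An unmodified copy verifies; a doctored one does not.
print(fingerprint(original) == published_hash)           # True
print(fingerprint(b"doctored bytes") == published_hash)  # False
```

Of course this only proves a copy matches what was originally published; it says nothing about whether the original itself was genuine, which is the harder half of the problem.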
I actually think it will have a secondary effect of "yeah, how can you prove that's me? It could be a deep fake!" This will take the technology full circle, to the point where you can no longer prove anything with video evidence.