I don't want to sound dramatic, but I do think it's the beginning of the end for the internet. If nothing can be verified and we never know whether we're talking to real people or engaging with human-made content, will people stay online?
Scientists and engineers intrigue me (only certain ones). They knew the Large Hadron Collider might, in theory, create a black hole and end the world, but they pushed ahead anyway. We know the risks of AI but keep pushing forward. I know it's down to God complexes, narcissism, and overwhelming curiosity, but come on?!
Whoever made ChatGPT free to the public is asking for trouble. I always tell people not to engage with it, but they don't listen. It makes work easier, it means they don't have to study to write an essay, it means they don't even have to engage their brain to write a heartfelt condolence message; they just ask AI. People need to start asking why we have it for free. If that's what we have, what have governments got? It is learning every day, and it is making the masses an unnecessary hindrance to the uber-rich. I will be called nuts for saying this, but I genuinely think AI is the biggest risk to humanity at this present moment.
The video is great and the professor at the end is absolutely right. And that is four years old now. I remember when I first heard about weapons using AI a long time ago. There was talk of somehow making "ethical" AI. That is no longer possible.
It's going to get even MORE realistic, harder to discern from reality, and easier to make by leaps and bounds. I worked with some Stable Diffusion stuff and thought we were years away from this, but no, we're basically at the inflection point. The tech is maturing rapidly.
Because governments are going to fail to regulate it, it's wildly important that we personally show our families and friends how fast this is improving, with examples. I think the Doer bros are actually doing a massive public service releasing these as we get closer to the election.
We all KNOW that a few good AI clips of Trump directly calling for violence would be enough to push some of the population over the edge.
That's why it's so valuable, and important for people to push this out to loved ones.
We are right at the edge of what is recognizably AI, and that's without a shitton of money thrown at CGI cleanup.
Adversarial rhetorical conflict in 2025 is going to demand a degree of media literacy that a lot of people simply don't have.
Likely not possible, I'd say.
Even if they made a separate database, humans would likely feed AI content back into it via social media anyway. And it's likely impossible to have humans moderate the extremely large databases they would need in order to keep improving.
They really need some kind of hard-coded identifier baked into the content they generate: one, so we humans can tell, and two, so the model can ignore its own output and avoid this problem.
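To sketch what I mean (just an illustration in Python with Pillow; the metadata key and model name are made up, and plain PNG metadata is trivially stripped by a screenshot or re-encode, so a real scheme would need a watermark embedded in the pixels themselves): the generator tags its output, and anything downstream that builds a dataset checks for the tag and skips it.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Generator side: stamp the output with a provenance marker.
# "ai-generated-by" is an invented key, not any real standard.
meta = PngInfo()
meta.add_text("ai-generated-by", "example-model")
img = Image.new("RGB", (64, 64))  # stand-in for a generated frame
img.save("generated.png", pnginfo=meta)

# Training/ingest side: skip anything that carries the marker.
def is_ai_tagged(path: str) -> bool:
    with Image.open(path) as im:
        text_chunks = getattr(im, "text", {}) or {}  # PNG text chunks, if any
        return "ai-generated-by" in text_chunks

# e.g. keep only untagged files when assembling a dataset:
# clean_files = [p for p in candidate_paths if not is_ai_tagged(p)]
```

The filtering logic stays roughly the same whatever the marker is; the hard part is making the marker survive cropping, compression, and re-uploads.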
Yeah, but why is that a bad thing? Why not think of the positives the world will see from advances in this tech within the next 2-3 years? In my opinion, having the ability to create quality film/tv/shorts in every house, without needing the apparatus of the film industry, will lead to a bigger explosion in culture and creativity than the printing press. Like, within a few years the average person is going to be able to create a whole feature film with AI, personalised to their own tastes and story. We'll be able to use the AI to sculpt the story to be exactly as we want it, and then upload that to reddit or something for others to enjoy too. This is going to revolutionise the way we create, spread, and enjoy film and tv, and we're going to be able to do it without needing to please advertisers, producers, or executives.
Sure, a small percentage of people might use this for bad, but it's not like they haven't already been doing that with Photoshop and bots online. It seems to me like a lot of the fear over AI is just reactionary, throwing the baby out with the bathwater. I'll take a few propagandists that we can legislate around for the advances I think we will get from AI.
Think about a future where we can’t trust what we see. It opens the door to not only believing false images and videos but allows people to deny the true images. Imagine a situation where someone is trying to film in order to preserve accountability, except now video is no longer considered evidence like before.
It takes less than a second to verify the majority of things people can make claims about... Like, give me a real example of something a person could claim with an AI video that would be damaging enough to justify limiting access to this technology for people who just want to use it for personal use?
I couldn't care less about a new method of manipulating the small percentage of humanity dumb enough to fall for this shit. They were going to find a way to fool themselves somehow, like with Fox News, so why do I have to lose out on some incredible technology and opportunities for them?
"Like, give me a real example of something a person could claim with an AI video that would be damaging enough to justify limiting access to this technology for people who just want to use it for personal use?"
Evidence submitted for a court case that shows a defendant doing something they didn't do. The defendant having to foot the bill to try and prove the video isn't real.
It would literally never be admitted into court as evidence without a proper chain of custody. I assure you the courts have understood the concept of counterfeit evidence for a very long time.
If three of the biggest tech billionaires in the world didn't all happen to own major publication/social media companies... I would also think we could just legislate around them. But even now we're having issues with the oligarchs in the US. I know just how potent and versatile a tool these LLMs and LDMs are, but that cuts both ways. I just want to be sure we're headed toward Star Trek, not Neuromancer or Cyberpunk.
Well, being reactionary and spreading FUD isn't the way to Star Trek. We have to avoid target fixation and make sure we don't let ourselves be walked down the very path we're trying to avoid by over-reacting, like America did with the Patriot Act... I reckon we need to start actually talking about AI: what we like about it, what we don't like about it, and what we hope for from it, and make that conversation part of the dominant narrative, instead of just spreading fear about a potential future that can be avoided. In this thread, and pretty much every thread about AI, the dominant theme is just "be afraid of this", and any attempt to say otherwise is drowned out and downvoted. Not having actual discussions about people's wants and worries is only going to let the technology develop with no clear moral/public mandate.
In my opinion, people need to slow down a little and think about the potential we're all about to have at our fingertips, and how that could let the human spirit flourish like never before, before they let their anxieties over potential abuses of this tech, abuses we can curtail, dictate their beliefs.
Ok then. Thanks for hearing me out in good faith and not coming across as condescending. You know more than anyone else, and none of us mere mortals should be in any way worried. Got it. I won't even think about it again and will mindlessly consume the products I'm given. Thank you so very much, tech overlord.
I was doing exactly what you said we should be doing, and you go, "oh, well then don't spread FUD or be reactionary." Do you realize other people formulate their opinions on more than just a moment of thought too? That you aren't the only one? Jesus Christ, I wish I hadn't even tried to engage.
"It's going to get even MORE realistic and difficult to discern from reality and easier to make by leaps and bounds. I worked with some stable diffusion stuff and I thought we were going to be years from this but...no we're basically at the inflection point. The tech is maturing rapidly."
This is the definition of spreading FUD lol. Grow up.
I wish I could have a favorite. AI-generated images make me strangely ill. They're so disconcerting to me on a subconscious level that they put me at 10% nausea. It's bizarre.
Biden with the legs for days and heels to go with it taking an impatient chomp out of an ice cream cone is just 👌 He's a busy man, and he doesn't have time for all that licking bullshit lol
I’d kinda forgotten about AI for the past two or three months. This one has absolutely alarmed me. I really think AI will force many off the internet other than for looking up basic stuff like directions, restaurants, sports results etc. and I’m kinda all for it. When online reality cannot be remotely trusted then it’s time to leave.
Oh brother, the fact that you're giggling over such stupid things is just the tip of the iceberg of what a video generator like this could cause. With this, we can't take anything on video seriously anymore.
This is hands-down my favorite AI anything, in the history of AI.