Was discussing this issue at a philosophy group meetup this morning. Here's an extreme but not impossible situation that could result from AI misinformation. India and Pakistan have been locked in conflict ever since Partition. India has accused Pakistan of supporting terrorist attacks, such as the 2008 Mumbai attacks that killed over 175 people. I've read before that there are people in the Indian military who believe that, given how much larger India is, it could possibly survive and even win a full-on war with Pakistan, nukes included.

Extremists in India use AI to create a very convincing fake video of Pakistani leaders meeting with Islamist groups like Lashkar-e-Taiba, pledging support for them and planning large-scale attacks in India. Pakistan's denials, and even debunkings, are seen as self-serving and not accepted. This leads to India going to war with Pakistan. Since Pakistan has close ties to the PRC, and the PRC also has a long-standing low-level conflict with India, it comes in on the side of Pakistan. The US tries to mediate but gets drawn into the conflict to counter PRC aspirations to seize territory in India. This leads to WWIII, all from misinformation.
It seems like you have a Civil War fantasy. Respectfully, propaganda is one of the oldest forms of weaponry, and everyone uses it. Most wars are fought over it. You can't prevent it. As people become more intelligent, more sophisticated ways are found to trick the "less intelligent." The best defense against propaganda is to stop and listen to people, not to find more sophisticated ways to silence your opposition.
Yes, that should be the way to do it, but from what we see even on Clutchfans, that doesn't appear to be working.
I am much more concerned about AI deepfake voices now than before. Planet Money did a three-part series on AI about a year ago in which they had ChatGPT write a Planet Money episode and an AI-generated Robert Smith voice it. The final result was impressive and fairly convincing, but it required a skilled person to build the AI voice, using a couple of hours of high-fidelity Robert Smith recordings to model from. So I thought it was still a niche use case. I am absolutely floored that just a few months later, a complete amateur was able to make a convincing AI voice recording that caused real harm. Luckily the perpetrator was a complete idiot: he used the school's computer to Google how to make a fake AI voice and linked his personal email and phone number to the file. Soon (maybe now?), it will be possible for someone who knows what they're doing to create a completely convincing audio clip. Here's the AI-generated audio clip of the principal, btw. If you go into the clip knowing it's AI, it does sound a little weird. But I would not have given it a second thought if it were just some random recording heard without any preconceived notions.
You'll need to discuss confirming signals with your loved ones, like a passphrase or a favorite memory. Something like asking if Poochie the dog is OK if that person is supposedly hurt or kidnapped, and having them respond. Hanging up and calling back might not be enough if the scammer is sufficiently motivated. Pseudo spy ****, but almost necessary when your voice can be lifted after 5 seconds of phone time.
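For what it's worth, the "confirming signal" idea above is basically challenge-response authentication with a shared secret. Here's a minimal sketch in Python stdlib only; the secret, function names, and the whole setup are made up for illustration, and obviously no family is going to compute HMACs over the phone. The point it shows is that a pre-agreed secret plus an unpredictable question beats recognizing a voice.

```python
import hashlib
import hmac
import secrets

# Hypothetical shared secret that both parties agree on ahead of time
# (hard-coded here for illustration only -- never do this in real code).
SHARED_SECRET = b"poochie-is-ok"

def make_challenge() -> str:
    """The person receiving a suspicious call picks a random challenge."""
    return secrets.token_hex(8)

def respond(challenge: str, secret: bytes = SHARED_SECRET) -> str:
    """The caller proves they know the secret without saying it aloud."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(challenge: str, response: str, secret: bytes = SHARED_SECRET) -> bool:
    """Constant-time check that the response matches the challenge."""
    return hmac.compare_digest(respond(challenge, secret), response)

challenge = make_challenge()
answer = respond(challenge)          # real caller knows the secret
print(verify(challenge, answer))     # True
print(verify(challenge, "wrong"))    # False: impostor can't answer
```

The human version ("is Poochie OK?") works the same way: the question is unpredictable, and only someone holding the shared secret can give the agreed answer.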
Maybe people will start to DYR instead of going off a 7-second clip. More follow-up questions and fewer gotcha comments would be better for all.
The problem is that eventually this faked media will be so good that DYR is limited to looking for 'experts' giving their take. It would be so easy to fall into an echo chamber and only see the takes that affirm your own biases and opinions. In a coordinated attack, the 'experts' could be fake too. With how much people post their own video and audio on the internet, it would be trivial for a bad actor to sow distrust with fake video and audio of real and fake experts. How could anyone trust anything they get from the internet, MSM or otherwise?
Voice and face cloning by AI can be done with just a few seconds of the original voice and a few snapshots of the original person. It is already possible and will become more accessible and easier to use for the average user.
Outside of a legal case, I can't imagine a situation where having a discussion and asking follow-up questions wouldn't fix the problem. Video and audio content should be treated as hearsay. If you want trusted sources, you go straight to the source. People have relied on the MSM for way too long to tell them what is legit and what isn't. YouTube or any other video distribution site is not a trusted source. The person's own verified account is.
The problem is that fake voices aren't the only problem in this society. It is the speed and ease with which information is transmitted. It is that people are not only easily outraged but look for reasons to be outraged. Why DYR when you can repost something outrageous you saw on X that makes those you disagree with politically look bad? If it's not true, or not what it seems, it doesn't matter; just move on to the next thing that seems outrageous.
Real personal accounts can and will publish AI-generated content that is fake. It's also not simple to know whether a personal account is verified to belong to a real person. Unless you know the person, already know their reputation, or the system has a real-person verification process, you cannot trust that either. On Twitter, for example, real-person verification was removed in April 2023; the checkmark is now simply for premium subscribers. Forget the trustworthiness of the posters. In the new reality of AI-created content being indistinguishable from the real thing, content needs to be marked/labeled/signed/etc. Trust the content that is "labeled" (e.g. positively real, positively AI) and be very cautious of anything that isn't.
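The marked/labeled/signed idea above is roughly what content-provenance schemes like C2PA aim at: a label cryptographically bound to the media bytes, so you can't swap the content or the "real vs. AI" claim without breaking the label. A toy sketch of the binding idea, in Python stdlib only. Everything here (the key, the label format, the function names) is invented for illustration; real provenance systems use public-key signatures and certificate chains, not a shared-key HMAC.

```python
import hashlib
import hmac

# Hypothetical publisher key (real schemes use asymmetric signing keys;
# HMAC stands in here only because it's in the standard library).
SIGNING_KEY = b"publisher-secret-key"

def label_content(media_bytes: bytes, origin: str) -> dict:
    """Attach an origin label ('human' or 'ai') and a tag binding it to the bytes."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    tag = hmac.new(SIGNING_KEY, f"{digest}:{origin}".encode(),
                   hashlib.sha256).hexdigest()
    return {"sha256": digest, "origin": origin, "tag": tag}

def check_label(media_bytes: bytes, label: dict) -> bool:
    """Any alteration of the bytes or the origin field invalidates the tag."""
    expected = label_content(media_bytes, label["origin"])
    return (expected["sha256"] == label["sha256"]
            and hmac.compare_digest(expected["tag"], label["tag"]))

clip = b"...audio bytes..."
label = label_content(clip, "ai")
print(check_label(clip, label))           # True: label matches the clip
print(check_label(clip + b"x", label))    # False: clip was altered
```

Note what this does and doesn't buy you: a valid label proves the holder of the key vouched for "these exact bytes, this origin claim," not that the claim itself is honest; trust still bottoms out in who holds the signing key.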
We shouldn't care who posted the information; that is irrelevant. It's been obvious that 'reputable' people share information for ad clicks. Instead, if someone has questions about a video, they should ask the person in the video for confirmation or clarification. If the content is unverifiable for whatever reason, it should be treated as nothing more than a rumor. Video content should no longer be considered 'proof,' just as idle gossip isn't considered proof. Somewhere along the way, we picked up the notion that the person with the 'latest' information is the most informed. It's quite the opposite. Those who follow 'breaking news,' whether on X or live MSM news, are usually very poorly informed. Those who follow up on the story after the media sources have farmed all their ad revenue are the ones who are most informed. This is why those who watch Fox, CNN, and the MSM are often highly misinformed. The full story comes out much later.
I mostly agree with this. I now try not to bother until a day later. Most MSM news, though, has at least some semblance of trying to be accurate. Platforms like X, FB, and other instant social media news? A good number of them do not even attempt to be accurate, and another good number of them consistently push partisan narratives with no regard for accuracy. Except for extreme weather coverage, I ignore both 24/7 news and social media news. I try to read more than watch, and I try to source from local papers and news first. Nationally, it would be the likes of NPR, CSM, and major newspapers.