
ChatGPT's political bias

Discussion in 'BBS Hangout: Debate & Discussion' started by AroundTheWorld, Dec 12, 2022.

  1. fchowd0311

    fchowd0311 Member

    Joined:
    Apr 27, 2010
    Messages:
    55,682
    Likes Received:
    43,473
    Yes, articles I read, because I wasn't there.

    Same as how you would obtain information about pre-Nazi Germany, because I'm assuming you aren't over 100 years old.

    And yes, the far right has cried wolf about their ideas and speech being suppressed ever since the Nazis, from Hitler to George Lincoln Rockwell.
     
    dmoneybangbang likes this.
  2. durvasa

    durvasa Member

    Joined:
    Feb 11, 2006
    Messages:
    38,893
    Likes Received:
    16,449
    Did you mean unavailable to public dialogue for non-legacy media or legacy media?
     
  3. FranchiseBlade

    Supporting Member

    Joined:
    Jan 14, 2002
    Messages:
    51,813
    Likes Received:
    20,473
    I was asking you, or anyone else upset about the way the AI commented on the lab leak theory and trans females: what would you rather the AI say about those issues?
     
  4. AroundTheWorld

    Joined:
    Feb 3, 2000
    Messages:
    83,288
    Likes Received:
    62,281
    That's a very generic question. I posted the article to point out some issues with this type of "AI" and put that up for discussion. Basically, the "AI" will say what it got trained on. It's a language model refined with reinforcement learning. So, to simplify, it will spit out what it has been fed.

    The question becomes relevant when this type of "AI" becomes more widespread and, e.g., governments start using it on citizens. Then the question becomes: who decides what it gets fed? And how do you regulate the update cycles (e.g., once there is clear evidence that something the model was previously fed is incorrect, how do you "get it out of the system" again)?
     
  5. FranchiseBlade

    Supporting Member

    Joined:
    Jan 14, 2002
    Messages:
    51,813
    Likes Received:
    20,473
    You would need to feed it different information. I was curious what different information you felt it should be trained on.
     
  6. AroundTheWorld

    Joined:
    Feb 3, 2000
    Messages:
    83,288
    Likes Received:
    62,281
    And my response was that it doesn't really matter, these were two random examples. The bigger question is how to ensure that the training done is fair and balanced, and how to ensure new proven insights and evidence replace previous information which turned out to be incorrect.

    As we see now, many things which were branded "conspiracy theories" by leftist fact checkers and journalists have actually turned out to be absolutely true. But these texts will still be floating around, and there might be more of them than the new information which proves they were wrong.

    So if you have a fully automated system of reinforcement learning, how does it account for new evidence once it is proven true?
     
  7. FranchiseBlade

    Supporting Member

    Joined:
    Jan 14, 2002
    Messages:
    51,813
    Likes Received:
    20,473
    Are you saying it is absolutely true that COVID-19 came from a lab?
     
    fchowd0311 likes this.
  8. fchowd0311

    fchowd0311 Member

    Joined:
    Apr 27, 2010
    Messages:
    55,682
    Likes Received:
    43,473
    Conspiracy theories can also be possible explanations that aren't confirmed yet. But spreading them as confirmed matters of fact is misinformation.
     
  9. AroundTheWorld

    Joined:
    Feb 3, 2000
    Messages:
    83,288
    Likes Received:
    62,281
    How would I know that?

    I'm saying that calling it a conspiracy theory is wrong.

    And there appears to be strong evidence for it being the case.

    But again, that is a separate discussion from the topic of this thread.

    https://www.foxnews.com/world/uk-government-covid-origins-wuhan-lab

    UK government believes Wuhan lab leak most likely COVID-19 origin: report
    Wuhan lab leak theory continues to gather steam as potential COVID-19 origin

    The United Kingdom's government is increasingly reassured that the coronavirus pandemic was the result of a lab leak in Wuhan, China, according to a new report.

    While the theory that the coronavirus was leaked from the Wuhan Institute of Virology was dismissed by world governments early into the pandemic, evidence continues to trickle out supporting the claim. Government officials in the U.K., U.S., and elsewhere have begun voicing support for further investigation into the lab leak possibility.

    Now, sources tell British newspaper the Telegraph that the "official view" within the U.K.'s leadership is that the virus did indeed escape from the Chinese lab, though government officials are more open about that view in private than in public.

    "I think the official view [within Government] is that it is as likely as anything else to have caused the pandemic. A lot of people like myself think it is more likely. I think attitudes have changed a little bit. The zoonotic transfer theory just didn’t make sense," Cambridge bio-security fellow Hamish de Bretton-Gordon told the paper.


    "There is a huge amount of concern about coming out publicly, but behind closed doors most people think it’s a lab leak. And they are coming round to the fact that even if they don’t agree with that, they must accept it’s likely, and they must make sure the policies are in place to stop it," de Bretton-Gordon explained.

    Britain's Prime Minister Boris Johnson visits a vaccination hub in the at Stoke Mandeville Stadium in Aylesbury, England, Monday Jan. 3, 2022, as the booster vaccination programme continues. (Steve Parsons/Pool via AP)

    Medical professionals in the U.S. have begun walking back their initial dismissal of the lab leak theory, though many continue to emphasize that investigations into the source of the pandemic are a "distraction."

    Former National Institutes of Health Director Dr. Francis Collins told Fox News on his last day in office that he's "sorry" the Wuhan lab-leak theory has become such a "huge distraction" for the country, despite there being "no evidence" to support it.

    Collins joined "Fox News Sunday" on his last day in office after more than a decade in the agency's top position. The geneticist and physician tapped by President Barack Obama to lead the NIH in 2009 dodged questions about his efforts to discredit the lab-leak theory at the onset of the pandemic, maintaining the most plausible explanation is that the virus spread through animal-to-human transmission.

    "I’m really sorry that the lab leak has become such a distraction for so many people because frankly, we still don’t know," Collins told host Bret Baier.

    U.S. scientists who publicly attributed the COVID-19 pandemic to natural origins rather than human engineering were far less confident in private, transcripts and notes from previous meetings show.

    Peter Daszak and Thea Fischer, members of the World Health Organization team tasked with investigating the origins of COVID-19, sit in a car arriving at Wuhan Institute of Virology in Wuhan, Hubei province, China Feb. 3, 2021. (REUTERS/Thomas Peter)

    However, conversations between public officials seem to indicate that some experts may have consciously chosen to suppress evidence that could fuel "conspiracists."

    "I really can't think of a plausible natural scenario where you get from the bat virus ... to nCoV where you insert exactly four amino acids 12 nucleotide that all have to be added at the exact same time to gain this function," Dr. Robert Garry from Tulane's School of Medicine said, according to notes from a February 2020 meeting released by House Republicans.
     
    Invisible Fan likes this.
  10. Invisible Fan

    Invisible Fan Member

    Joined:
    Dec 5, 2001
    Messages:
    45,954
    Likes Received:
    28,050
    If you pull the politics of this debate back a step, there are points of strong concern that have plagued AI ever since it existed. These issues were mostly raised by minorities, because they were the inappropriate "beneficiaries" of code/algorithms they didn't have a hand in writing or testing.

    We read the paper that forced Timnit Gebru out of Google. Here’s what it says.


    Gebru, a widely respected leader in AI ethics research, is known for coauthoring a groundbreaking paper that showed facial recognition to be less accurate at identifying women and people of color, which means its use can end up discriminating against them. She also cofounded the Black in AI affinity group, and champions diversity in the tech industry. The team she helped build at Google is one of the most diverse in AI and includes many leading experts in their own right. Peers in the field envied it for producing critical work that often challenged mainstream AI practices.

    A series of tweets, leaked emails, and media articles showed that Gebru’s exit was the culmination of a conflict over another paper she coauthored.

    ...

    The paper
    The paper, which builds on the work of other researchers, presents the history of natural-language processing, an overview of four main risks of large language models, and suggestions for further research. Since the conflict with Google seems to be over the risks, we’ve focused on summarizing those here.

    Environmental and financial costs
    Training large AI models consumes a lot of computer processing power, and hence a lot of electricity. Gebru and her coauthors refer to a 2019 paper from Emma Strubell and her collaborators on the carbon emissions and financial costs of large language models. It found that their energy consumption and carbon footprint have been exploding since 2017, as models have been fed more and more data.

    Strubell’s study found that training one language model with a particular type of “neural architecture search” (NAS) method would have produced the equivalent of 626,155 pounds (284 metric tons) of carbon dioxide—about the lifetime output of five average American cars. Training a version of Google’s language model, BERT, which underpins the company’s search engine, produced 1,438 pounds of CO2 equivalent in Strubell’s estimate—nearly the same as a round-trip flight between New York City and San Francisco. These numbers should be viewed as minimums, the cost of training a model one time through. In practice, models are trained and retrained many times over during research and development.

    Gebru’s draft paper points out that the sheer resources required to build and sustain such large AI models means they tend to benefit wealthy organizations, while climate change hits marginalized communities hardest. “It is past time for researchers to prioritize energy efficiency and cost to reduce negative environmental impact and inequitable access to resources,” they write.

    Massive data, inscrutable models
    Large language models are also trained on exponentially increasing amounts of text. This means researchers have sought to collect all the data they can from the internet, so there's a risk that racist, sexist, and otherwise abusive language ends up in the training data.

    An AI model taught to view racist language as normal is obviously bad. The researchers, though, point out a couple of more subtle problems. One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms.

    It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.

    Moreover, because the training data sets are so large, it’s hard to audit them to check for these embedded biases. “A methodology that relies on datasets too large to document is therefore inherently risky,” the researchers conclude. “While documentation allows for potential accountability, [...] undocumented training data perpetuates harm without recourse.”

    Research opportunity costs
    The researchers summarize the third challenge as the risk of “misdirected research effort.” Though most AI researchers acknowledge that large language models don’t actually understand language and are merely excellent at manipulating it, Big Tech can make money from models that manipulate language more accurately, so it keeps investing in them. “This research effort brings with it an opportunity cost,” Gebru and her colleagues write. Not as much effort goes into working on AI models that might achieve understanding, or that achieve good results with smaller, more carefully curated data sets (and thus also use less energy).

    Illusions of meaning
    The final problem with large language models, the researchers say, is that because they’re so good at mimicking real human language, it’s easy to use them to fool people. There have been a few high-profile cases, such as the college student who churned out AI-generated self-help and productivity advice on a blog, which went viral.

    The dangers are obvious: AI models could be used to generate misinformation about an election or the covid-19 pandemic, for instance. They can also go wrong inadvertently when used for machine translation. The researchers bring up an example: In 2017, Facebook mistranslated a Palestinian man’s post, which said “good morning” in Arabic, as “attack them” in Hebrew, leading to his arrest.

    Why it matters
    Gebru and ...
     
    Amiga and jiggyfly like this.
  11. FranchiseBlade

    Supporting Member

    Joined:
    Jan 14, 2002
    Messages:
    51,813
    Likes Received:
    20,473
    Yes. It is entirely possible it was developed in a lab. But I just wanted to clarify that what was being called false information from the AI wasn't based on a misunderstanding.
     
  12. dmoneybangbang

    Joined:
    May 5, 2012
    Messages:
    22,606
    Likes Received:
    14,341
    You’ve never heard that idiom…. “History is written by the winners”?

    Also the Nazis lost…. So suck it Nazis and all their sympathizers.
     
    jiggyfly likes this.
  13. jiggyfly

    jiggyfly Member

    Joined:
    Jul 2, 2015
    Messages:
    21,011
    Likes Received:
    16,856
    That's pretty much what their sentiment was, and then further history unfolded.

    I don't get what your point is.
     
  14. jiggyfly

    jiggyfly Member

    Joined:
    Jul 2, 2015
    Messages:
    21,011
    Likes Received:
    16,856
    What media is that?

    What media has open dialogue with the public?
     
  15. dmoneybangbang

    Joined:
    May 5, 2012
    Messages:
    22,606
    Likes Received:
    14,341
    Seems like folks just want the AI to tar and feather Fauci….
     
    fchowd0311 and jiggyfly like this.
  16. fchowd0311

    fchowd0311 Member

    Joined:
    Apr 27, 2010
    Messages:
    55,682
    Likes Received:
    43,473
    "Alexa, is Dr. Fauci the devil incarnate?"

    "No"


    The BIAS!
     
  17. Space Ghost

    Space Ghost Member

    Joined:
    Feb 14, 1999
    Messages:
    18,220
    Likes Received:
    8,604
    So predictable. I saved the second part of my statement in anticipation of this response.

    -6 million Jews (or whatever the number is) certainly didn't 'win'. I would say the Nazis did a pretty bang-up job with their objective.
    -Extreme authoritarian regimes always lose in the end. While I wouldn't call the ultra left too extreme, they certainly are bitter that they are no longer controlling the narrative on Twitter.
     
  18. fchowd0311

    fchowd0311 Member

    Joined:
    Apr 27, 2010
    Messages:
    55,682
    Likes Received:
    43,473
    You can frame it that way or you can just say people are expressing their disagreement and giving reasons why. That's normal my friend.


    Anyone who sincerely cared about authoritarianism would be logically more fearful of an entire political party consistently denying the accuracy of the vote count even if they lose by millions of votes.


    That is scary. Not private companies curating content on their platforms. You want a solution to this? Have the conservative base of the country stop dismissing higher education, and have more people with the literacy and PATIENCE to actually source information directly rather than relying on the meme generator that is Twitter. That way, private platforms don't hold as much power over media narratives, because the population has literacy and patience. It's harder to brainwash people with literacy and patience.
     
    #38 fchowd0311, Dec 13, 2022
    Last edited: Dec 13, 2022
  19. dmoneybangbang

    Joined:
    May 5, 2012
    Messages:
    22,606
    Likes Received:
    14,341
    So…. You haven’t heard of that idiom before which is why you are confused by my cheeky statement?

    So what is your point exactly?

    Do we need more historical perspectives from the Nazi side?
     
  20. CCorn

    CCorn Member

    Joined:
    Dec 26, 2010
    Messages:
    22,316
    Likes Received:
    23,125
    Once upon a time, AroundTheWorld and Ron DeSantis decided to go on a date together. They started their day at the roller skating rink, where Ron impressed AroundTheWorld with his skill at skating backwards. After an hour or so of skating, they decided to grab some dinner at a Persian restaurant. The food was delicious, and the conversation was lively as they discussed everything from politics to their shared love of adventure.

    After dinner, they headed to Lake Ivanhoe to watch the sunset. As the sun began to dip below the horizon, they sat on the shore and enjoyed some German chocolate while they continued to chat. The evening was filled with laughter and good cheer, and by the end of the night, both AroundTheWorld and Ron had a great time on their date.
     
    FranchiseBlade and fchowd0311 like this.
