
Tech leaders urge pause in 'out-of-control' AI race

Discussion in 'BBS Hangout: Debate & Discussion' started by Amiga, Apr 2, 2023.

  1. rocketsjudoka

    rocketsjudoka Contributing Member
    Supporting Member

    Joined:
    Jul 24, 2007
    Messages:
    53,958
    Likes Received:
    41,935
    At the minimum, just talking about a pause on AI is raising awareness of the issue. Throwing up your hands and allowing AI development to continue unabated, without any consideration of the potential negative consequences, is what has gotten humanity into a lot of problems.
     
  2. rocketsjudoka

    rocketsjudoka Contributing Member
    Supporting Member

    Joined:
    Jul 24, 2007
    Messages:
    53,958
    Likes Received:
    41,935
    This is the problem: so much is done with so little thought of long-term consequences.

    It’s less the specific technology than this attitude that will wipe us out. Whether it’s AI, genetic engineering or self-replicating nanotechnology.
     
    B-Bob likes this.
  3. Amiga

    Amiga 10 years ago...
    Supporting Member

    Joined:
    Sep 18, 2008
    Messages:
    21,814
    Likes Received:
    18,607
    Microsoft Calls for A.I. Rules to Minimize the Technology’s Risks
    Its president, Brad Smith, said companies needed to “step up” and governments needed to “move faster” as artificial intelligence progressed.


    https://www.nytimes.com/2023/05/25/technology/microsoft-ai-rules-regulation.html


    Microsoft, which has promised to build artificial intelligence into many of its products, proposed regulations including a requirement that systems used in critical infrastructure can be fully turned off or slowed down, similar to an emergency braking system on a train. The company also called for laws to clarify when additional legal obligations apply to an A.I. system and for labels making it clear when an image or a video was produced by a computer.

    ..

    Sam Altman, the chief executive of OpenAI, which makes ChatGPT and counts Microsoft as an investor, told a Senate subcommittee this month that government must regulate the technology.

    ..

    He endorsed the idea, supported by Mr. Altman during his congressional testimony, that a government agency should require companies to obtain licenses to deploy “highly capable” A.I. models.


    “That means you notify the government when you start testing,” Mr. Smith said. “You’ve got to share results with the government. Even when it’s licensed for deployment, you have a duty to continue to monitor it and report to the government if there are unexpected issues that arise.”


    Microsoft added that governments should designate certain A.I. systems used in critical infrastructure as “high risk” and require them to have a “safety brake.” It compared that feature to “the braking systems engineers have long built into other technologies such as elevators, school buses and high-speed trains.”

    In some sensitive cases, Microsoft said, companies that provide A.I. systems should have to know certain information about their customers. To protect consumers from deception, content created by A.I. should be required to carry a special label, the company said.


    Mr. Smith said companies should bear the legal “responsibility” for harms associated with A.I. In some cases, he said, the liable party could be the developer of an application like Microsoft’s Bing search engine that uses someone else’s underlying A.I. technology. Cloud companies could be responsible for complying with security regulations and other rules, he added.

    “We don’t necessarily have the best information or the best answer, or we may not be the most credible speaker,” Mr. Smith said. “But, you know, right now, especially in Washington D.C., people are looking for ideas.”
     
    rocketsjudoka likes this.
  4. rocketsjudoka

    rocketsjudoka Contributing Member
    Supporting Member

    Joined:
    Jul 24, 2007
    Messages:
    53,958
    Likes Received:
    41,935
    We need regulation on the international level to deal with the development and use of AI. Given both the potential promise and threat of AI, we need to treat this like nuclear proliferation.
     
  5. tinman

    tinman Contributing Member
    Supporting Member

    Joined:
    May 9, 1999
    Messages:
    97,849
    Likes Received:
    40,442
    We need AI law enforcement of our cities and borders: weaponized AI drones with facial recognition software.
     
  6. Amiga

    Amiga 10 years ago...
    Supporting Member

    Joined:
    Sep 18, 2008
    Messages:
    21,814
    Likes Received:
    18,607
    Global governance of human cloning is another example of international cooperation that helps drive individual countries to regulate cloning. We probably need both models: international cooperation on safeguarding the peaceful usage of AI and some type of international treaty to limit the proliferation of dangerous AI, including military usage of AI.

    OpenAI and Microsoft are major players, if not the biggest at this time. It starts here and will spread. The people in this industry realize they can't do it on their own and need robust gov regulation to limit the risks of AI.
     
    rocketsjudoka likes this.
  7. DonnyMost

    DonnyMost not wrong
    Supporting Member

    Joined:
    May 18, 2003
    Messages:
    47,397
    Likes Received:
    16,949
    Unfortunately we're going to have to see something really bad happen first.
     
    Buck Turgidson likes this.
  8. Buck Turgidson

    Joined:
    Feb 14, 2002
    Messages:
    85,500
    Likes Received:
    83,774
    and then, of course, it's too late
     
    rocketsjudoka likes this.
  9. DonnyMost

    DonnyMost not wrong
    Supporting Member

    Joined:
    May 18, 2003
    Messages:
    47,397
    Likes Received:
    16,949
    Maybe.

    I think something very consequential can happen without it being "too late". Maybe some deep fake disrupts a really important election or something.

    We haven't flirted with the whole self-imposed doomsday problem for most of our history, so the jury is still out on how we can handle such responsibility.

    One thing I do know is that AI is effectively a key to power, and what we know about keys to power is that their desire is universal and evergreen. Nukes are the perfect analogy. We can sit around and talk about 'no more nukes' all we want, but the truth is every world leader knows that a nuclear weapon is an automatic ticket to a seat at the table with the big kids... so they will always be sought.
     
    Buck Turgidson likes this.
  10. Buck Turgidson

    Joined:
    Feb 14, 2002
    Messages:
    85,500
    Likes Received:
    83,774
  11. Invisible Fan

    Invisible Fan Contributing Member

    Joined:
    Dec 5, 2001
    Messages:
    43,295
    Likes Received:
    25,313
    I got caught up reading the links in the top quarter, but the rest is mostly sensible in terms of the immediate impact of AI in general.

    So I won't post that part...

    spectrum.ieee.org/gpt-4-calm-down
    Just Calm Down About GPT-4 Already
     
  12. DonnyMost

    DonnyMost not wrong
    Supporting Member

    Joined:
    May 18, 2003
    Messages:
    47,397
    Likes Received:
    16,949
    I'm talking about the length of time human civilization has been around vs. how long we've had access to the "end species" button.

    I'm aware we've had close calls. What I'm saying is that we don't have a lot of data points and the nuclear age is a percent of a fraction of a percent in terms of human history.
     
    Buck Turgidson likes this.
  13. fchowd0311

    fchowd0311 Contributing Member

    Joined:
    Apr 27, 2010
    Messages:
    47,647
    Likes Received:
    36,593
    I'm becoming frustrated with the way people express their fears of AI. AI will never be the downfall of society because it became sentient and decided to take over the human race. No, AI is dangerous because it is simply a tool that wealthy people can use to **** over poor people like any other tool.

    Already seeing this with pricing algorithms forming cartels and social media algorithms creating mentally unstable individuals because rich people found out that negative discourse is the best way to maximize engagement and therefore profits.
     
  14. Buck Turgidson

    Joined:
    Feb 14, 2002
    Messages:
    85,500
    Likes Received:
    83,774
    Gotcha, we're talking about 2 different things

     
  15. London'sBurning

    London'sBurning Contributing Member

    Joined:
    Dec 5, 2002
    Messages:
    7,205
    Likes Received:
    4,810
    Yea, I forget which Sean Carroll podcast it was, but he had an AI expert on who mentioned that AI won't really be able to do much without a body to make approximate observations on newly presented data. The thought process is: say you want to mine the dark side of the moon for Helium-3. You do your satellite flybys, collect as much data as you can for the mission, run it through your AI to calculate the best plans, and do all the prep work you believe is necessary for the mission to succeed. Then, upon landing on the spot thought best, you hit some new complication you couldn't foresee until it was too late. A lot of progress on things is a constant updating of new information from the real world that you didn't anticipate.

    So you can code AI to anticipate what to do in a scenario that a real person or team of people worked through themselves and taught the AI how to troubleshoot. But when it comes to something completely new that it wasn't trained on, something that could only be learned by a physical presence there to detect the new thing, it won't know what to do. However, if it did have a body that was extremely sensitive to collecting data about the real world around it, it would be more autonomous and could respond on its own to its environment, making it more in the mold of what living organisms are more or less pre-programmed to do by our own DNA. That DNA creates human cells with various individual functions that collectively make up our bodies: we have hands to manipulate the world, with nerve endings that connect to our CNS, which is connected to our brain, which is already pre-programmed to monitor hunger, sleep, and environmental changes like smoke from burnt popcorn in the kitchen. We are constantly being bombarded and updated with new environmental information that our body communicates to our brain, which then manages based on it.

    AI can't do that. It can't do that without a body with a CNS capable of describing the environment around it. It still needs input from people's decades of scientific research, conducted in lab settings by people using their own physical bodies to walk around and run experiments, which they then physically uploaded onto the internet, where it was gathered for AI to spout off for all to be impressed by.

    I think the more appropriate fear over AI is the quality of the data it receives and the real-world context of that data, which it is by default incapable of truly understanding, again because it's absent a body. As the Last Week Tonight episode on AI touched upon, if your data, for example, has a racial bias to it, and you input that data into your AI, then your AI will also have a racial bias. That then makes it easier for bigots to wash their hands of something they'd benefit from: it's not them being bigoted, it's the AI. Think of how landlords rationalize rent increases by pointing to the algorithms as an excuse to justify their actions.

     
  16. pgabriel

    pgabriel Educated Negro

    Joined:
    Dec 6, 2002
    Messages:
    42,755
    Likes Received:
    2,988
    tinman likes this.
  17. Amiga

    Amiga 10 years ago...
    Supporting Member

    Joined:
    Sep 18, 2008
    Messages:
    21,814
    Likes Received:
    18,607
    Yea well…

     
    tinman and Invisible Fan like this.
  18. tinman

    tinman Contributing Member
    Supporting Member

    Joined:
    May 9, 1999
    Messages:
    97,849
    Likes Received:
    40,442
    This is amazing!
     
  19. rocketsjudoka

    rocketsjudoka Contributing Member
    Supporting Member

    Joined:
    Jul 24, 2007
    Messages:
    53,958
    Likes Received:
    41,935
    We have a big problem with nuclear proliferation, but it’s telling that other than those two times we haven’t used nukes. We haven’t even seen a dirty bomb used by non-state actors, even though a lot of groups have tried.
     
  20. rocketsjudoka

    rocketsjudoka Contributing Member
    Supporting Member

    Joined:
    Jul 24, 2007
    Messages:
    53,958
    Likes Received:
    41,935
    Yes, very much the danger of AI is how people will use it. That alone should give pause, and ethics and regulation should develop around it.

    The possibility of a sentient AI manipulating us, while far-fetched, can’t be ruled out either, and given we have so little understanding of how AI works, that is all the more reason for ethics and regulation.
     
    fchowd0311 likes this.
