
Tech leaders urge pause in 'out-of-control' AI race

Discussion in 'BBS Hangout: Debate & Discussion' started by Amiga, Apr 2, 2023.

  1. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,096
    Likes Received:
    23,375
    Sounds like there are two camps among the signees:

    1- The tool will be abused, worsening society very quickly
    2- AI will wake up and destroy us all

    After GPT-4, tech leaders urge a 6-month pause in the AI race : NPR

    Are tech companies moving too fast in rolling out powerful artificial intelligence technology that could one day outsmart humans?

    That's the conclusion of a group of prominent computer scientists and other tech industry notables such as Elon Musk and Apple co-founder Steve Wozniak who are calling for a 6-month pause to consider the risks.

    Their petition published Wednesday is a response to San Francisco startup OpenAI's recent release of GPT-4, a more advanced successor to its widely used AI chatbot ChatGPT that helped spark a race among tech giants Microsoft and Google to unveil similar applications.

    What do they say?
    The letter warns that AI systems with "human-competitive intelligence can pose profound risks to society and humanity" — from flooding the internet with disinformation and automating away jobs to more catastrophic future risks out of the realms of science fiction.

    It says "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."

    "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," the letter says. "This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."

    A number of governments are already working to regulate high-risk AI tools. The United Kingdom released a paper Wednesday outlining its approach, which it said "will avoid heavy-handed legislation which could stifle innovation." Lawmakers in the 27-nation European Union have been negotiating passage of sweeping AI rules.

    Who signed it?
    The petition was organized by the nonprofit Future of Life Institute, which says confirmed signatories include the Turing Award-winning AI pioneer Yoshua Bengio and other leading AI researchers such as Stuart Russell and Gary Marcus. Others who joined include Wozniak, former U.S. presidential candidate Andrew Yang and Rachel Bronson, president of the Bulletin of the Atomic Scientists, a science-oriented advocacy group known for its warnings against humanity-ending nuclear war.

    Musk, who runs Tesla, Twitter and SpaceX and was an OpenAI co-founder and early investor, has long expressed concerns about AI's existential risks. A more surprising inclusion is Emad Mostaque, CEO of Stability AI, maker of the AI image generator Stable Diffusion that partners with Amazon and competes with OpenAI's similar generator known as DALL-E.

    What's the response?
    OpenAI, Microsoft and Google didn't respond to requests for comment Wednesday, but the letter already has plenty of skeptics.

    "A pause is a good idea, but the letter is vague and doesn't take the regulatory problems seriously," says James Grimmelmann, a Cornell University professor of digital and information law. "It is also deeply hypocritical for Elon Musk to sign on given how hard Tesla has fought against accountability for the defective AI in its self-driving cars."

    Is this AI hysteria?
    While the letter raises the specter of nefarious AI far more intelligent than what actually exists, it's not "superhuman" AI that some who signed on are worried about. While impressive, a tool such as ChatGPT is simply a text generator that makes predictions about what words would answer the prompt it was given based on what it's learned from ingesting huge troves of written works.
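    The "text generator that makes predictions about what words would answer the prompt" description can be illustrated with a toy sketch. This is a hypothetical analogy, not how GPT-4 actually works: real models learn billions of neural-network weights over subword tokens, while this toy just counts which word follows which in a tiny corpus and always picks the most frequent continuation.

    ```python
    from collections import Counter, defaultdict

    # Tiny "training corpus" standing in for the huge troves of text a real model ingests.
    corpus = "the cat sat on the mat and the cat ran".split()

    # Count which word tends to follow which (a bigram table; real models learn
    # continuous weights rather than explicit counts).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def generate(prompt_word, length=4):
        """Repeatedly predict the most likely next word, as a crude analogy."""
        words = [prompt_word]
        for _ in range(length):
            candidates = following.get(words[-1])
            if not candidates:
                break
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(generate("the"))  # -> "the cat sat on the"
    ```

    The point of the analogy: the output looks fluent because it mirrors patterns in the training text, not because anything in the table "understands" cats or mats.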

    Gary Marcus, a New York University professor emeritus who signed the letter, said in a blog post that he disagrees with others who are worried about the near-term prospect of intelligent machines so smart they can self-improve themselves beyond humanity's control. What he's more worried about is "mediocre AI" that's widely deployed, including by criminals or terrorists to trick people or spread dangerous misinformation.

    "Current technology already poses enormous risks that we are ill-prepared for," Marcus wrote. "With future technology, things could well get worse."
     
  2. tinman

    tinman 999999999
    Supporting Member

    Joined:
    May 9, 1999
    Messages:
    104,291
    Likes Received:
    47,176
    I disagree
    I want the AI to rule the world
     
  3. T_Man

    T_Man Member

    Joined:
    Jan 27, 2000
    Messages:
    6,863
    Likes Received:
    2,888
    I agree with this..

    T_Man
     
  4. tinman

    tinman 999999999
    Supporting Member

    Joined:
    May 9, 1999
    Messages:
    104,291
    Likes Received:
    47,176
  5. Sweet Lou 4 2

    Sweet Lou 4 2 Member

    Joined:
    Dec 16, 2007
    Messages:
    39,183
    Likes Received:
    20,334
    Hysteria.

    It's human arrogance to think that AI will have some kind of need for survival or have human emotions - these things are uniquely human, and we don't even understand through what mechanism we have them ourselves, other than a billion years of evolution. To think AI will have the same traits is crazy. They don't have a "fear of death" or an idea of "greed" or "love".

    People watch way too many movies. AI isn't going to destroy us all unless they are programmed to do that. In which case it's not the AI but the programmer.

    The impact of AI will be determined by how it is used. Should AI be put in charge of our nuclear weapons? Obviously not. How will AI destroy us if it doesn't have access to WMDs? And how will it disrupt our lives if it isn't put into positions to disrupt our lives?
     
    astros123 likes this.
  6. tinman

    tinman 999999999
    Supporting Member

    Joined:
    May 9, 1999
    Messages:
    104,291
    Likes Received:
    47,176
    Ai should be used in law enforcement and war
     
  7. DonnyMost

    DonnyMost Member
    Supporting Member

    Joined:
    May 18, 2003
    Messages:
    48,988
    Likes Received:
    19,926
    Their fears are 100% correct, but it's an absolute waste of time pursuing a "pause" (basically mutual disarmament)... the rewards that await those who manufacture the most capable intelligent automation are always going to be near infinite, so good luck fighting against that temptation.

    What we should do is come to some agreed upon standard of ethics around AI.
     
  8. London'sBurning

    Joined:
    Dec 5, 2002
    Messages:
    7,205
    Likes Received:
    4,817
    Sounds like billionaires with no investment in AI want to hinder the economic prosperity it'll provide for the companies with existing AI, and see delaying it as an opportunity to play catch-up.
     
    red likes this.
  9. Ottomaton

    Ottomaton Member
    Supporting Member

    Joined:
    Feb 14, 2000
    Messages:
    19,193
    Likes Received:
    15,352
    Open the pod bay doors, ChatGPT.
     
    Nook, DonnyMost and tinman like this.
  10. Andre0087

    Andre0087 Member

    Joined:
    Jan 16, 2012
    Messages:
    10,009
    Likes Received:
    13,666
    That last part is great in theory, but getting countries to agree to it, especially China, isn't going to happen.
     
  11. tinman

    tinman 999999999
    Supporting Member

    Joined:
    May 9, 1999
    Messages:
    104,291
    Likes Received:
    47,176
    Woke trans person walks into the women’s bathroom
    Door doesn’t open
    Trans ‘unlock women’s bathroom door’
    AI ‘ you are not a woman’
    @Commodore
     
  12. DonnyMost

    DonnyMost Member
    Supporting Member

    Joined:
    May 18, 2003
    Messages:
    48,988
    Likes Received:
    19,926
    I also agree with this, but it's kinda hard to sleep at night if you don't even try.

    The one big caveat here is that China doesn't really innovate in this arena on their own... lol.
     
    Andre0087 likes this.
  13. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,096
    Likes Received:
    23,375
    Isn't it pretty arrogant to think that humans can control something they don't understand? The very experts who came up with the latest AI design and architecture have said for years now that they do not understand it. It's literally a black box to them. They stack transformer on top of transformer, and out comes some magic (magic because we don't know how it happens). That's already happening today. If we don't understand it, how can we ever predict what it will do? The chance of it simply destroying humans (for whatever reason) cannot be ruled out.
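    The "stack transformer on top of transformer, and out comes some magic" description can be sketched in miniature. This is a hypothetical toy, with random weights standing in for billions of trained parameters and a single attention head per layer; it only illustrates the structural point that each layer is plain matrix arithmetic, yet inspecting the numbers tells you almost nothing about what the composed stack will do.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d = 8          # embedding size (toy scale; real models use thousands)
    n_layers = 4   # "transformer on top of transformer"

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention_layer(x, Wq, Wk, Wv):
        # Single-head self-attention: every position mixes information from
        # every other position, weighted by query/key similarity.
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        weights = softmax(q @ k.T / np.sqrt(d))
        return x + weights @ v  # residual connection

    # Random weights stand in for the trained parameters no one can read meaning from.
    layers = [tuple(rng.normal(size=(d, d)) * 0.1 for _ in range(3))
              for _ in range(n_layers)]

    x = rng.normal(size=(5, d))   # embeddings for 5 input tokens
    for Wq, Wk, Wv in layers:     # composing the stack is the "black box"
        x = attention_layer(x, Wq, Wk, Wv)

    print(x.shape)  # (5, 8) - same shape in, same shape out, opaque in between
    ```

    Even in this tiny version, the only human-readable things are the shapes; interpretability research exists precisely because the intermediate numbers don't explain the behavior.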
     
  14. tinman

    tinman 999999999
    Supporting Member

    Joined:
    May 9, 1999
    Messages:
    104,291
    Likes Received:
    47,176
    Humans already destroy humans
     
  15. Sweet Lou 4 2

    Sweet Lou 4 2 Member

    Joined:
    Dec 16, 2007
    Messages:
    39,183
    Likes Received:
    20,334
    Because people are assuming that a bunch of transistor circuits will somehow mimic human vices????

    This is irrational fear. Humans feel the need to control things. If we can't control it, it's "dangerous" - like a wild animal. Why do we need to understand how it works or how to control it? What's the risk here? That a customer service AI is going to give bad information?

    What makes humans dangerous? It's the fear of death and desires. All the awful things humans do come from one of those two things.

    An AI won't fear death. It doesn't care about its own existence. Even a super intelligent, self-aware AI has no vested interest in whether it lives or dies. It has no desires. No needs. Time doesn't pass the same way for an AI as it does for humans. One second is an incredible amount of time for an AI compared to a human. They live our entire lifetimes in minutes.
     
    #15 Sweet Lou 4 2, Apr 2, 2023
    Last edited: Apr 2, 2023
  16. tinman

    tinman 999999999
    Supporting Member

    Joined:
    May 9, 1999
    Messages:
    104,291
    Likes Received:
    47,176

     
  17. Invisible Fan

    Invisible Fan Member

    Joined:
    Dec 5, 2001
    Messages:
    45,954
    Likes Received:
    28,048
    I agree with improving ethical frameworks, but as for a moratorium, this is an arms race propelled by capitalistic and nationalistic intentions. Very hard to put the genie back in the bottle...

    The rebuttals that "full sentience" isn't around the corner address concern number two but not concern number one: that it's a tool that can be readily abused, weaponized, monetized, and exploited.

    That is essentially the emotional component, where AI doesn't single-handedly cause public destruction but serves as a super catalyst for it.

    You couldn't add hundreds of billions in market value with cloning and stem cell research, but Google already paid that amount when ethical concerns and laziness made Bard look weak against the competition.

    Gotta give the people what they want?
     
    Nook likes this.
  18. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,096
    Likes Received:
    23,375
    No, that's not it. "Bad information" is already baked in. Bad actors will use their own versions of GPT-4 and more advanced models to do bad things. That's pretty much a given. This is an area that could be mitigated through self- and state regulation and standardized ethics. There is still a long way to go to get there, but it's relatively achievable. There are AI ethics teams at all these companies, and some governments have already started working on regulations. We can also react fairly quickly once we realize how damaging it can be as a tool used by "bad humans".

    Animals aren't smarter than us, so we aren't that concerned about them. The risk is with an entity that is much smarter than us that cannot be controlled. You can imagine it as an alien (just that it is created by humans). When an alien of superior intelligence shows up on Earth, can you guarantee it's nice to humans? Then why would you assume an AI of higher than human intelligence will be nice to humans? This isn't a new concept - AI scientists have talked about this since the 70s. But no one has taken it that seriously, probably simply because they think AI with that type of intelligence isn't going to be here for at least another 50 years, if ever. Now, it's not so clear.
     
  19. Sweet Lou 4 2

    Sweet Lou 4 2 Member

    Joined:
    Dec 16, 2007
    Messages:
    39,183
    Likes Received:
    20,334
    Bad actors can use tech to do bad things - yes. Bad actors can create programs to do bad things; they don't need AI. What is your actual fear? Aliens are evolved life - not AI. I think once again, people don't really get what AI means and have a Hollywood/sci-fi image of it. Don't let fear get in your way here.

    Try to actually articulate a realistic scenario that you are afraid of, and let's break it down. So you are afraid a bad actor will create an evil AI? To do what exactly? Invent evil things? Great space lasers that can wipe you out? I mean, what is it that you are afraid of happening here, exactly? Do you think some evil person is going to create Skynet and the military is going to put it in control of nukes?
     
  20. rocketsjudoka

    rocketsjudoka Member

    Joined:
    Jul 24, 2007
    Messages:
    58,167
    Likes Received:
    48,334
    I fully support a moratorium and increased regulations on developing AI. I don't think the fear is of an AI becoming like Skynet, launching a nuclear strike, or building terminators to wipe us out, but of AI being used to create more sophisticated misinformation to manipulate society and politics.

    An AI designed to run misinformation campaigns and to improve and refine its misinformation could be a huge problem. In an age when many people already refuse to accept even basic facts, sophisticated targeted messaging could do a lot to erode trust and stoke fear in society. Manipulation of public opinion could be used not only for political campaigns but also for manipulating markets and other things that depend on mass perception. Such an AI could grow beyond the control of its original designers and do as much damage to those who deployed it as to their targets.

    Technology is already ahead of much of our regulation and ethics. We really need to get a handle on the potential of AI and what it means for the future of humanity.
     
    Nook likes this.
